Christopher P. Landrigan, MD, MPH
Division of General Pediatrics, Department of Medicine, Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts
Division of Sleep Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
Division of Sleep Medicine, Harvard Medical School, Boston, Massachusetts
Email: clandrigan@partners.org

Engaging Families as True Partners During Hospitalization


Communication failures are a leading cause of sentinel events, the most serious adverse events that occur in hospitals.1 Interventions to improve patient safety have focused on communication between healthcare providers.2-4 Interventions focusing on communication between providers and families or other patient caregivers are under-studied.5,6 Given their availability, proximity, historical knowledge, and motivation for a good outcome,7 families can play a vital role as “vigilant partners”8 in promoting hospital communication and safety.

In this month’s Journal of Hospital Medicine, Solan et al. conducted focus groups and interviews of 61 caregivers of hospitalized pediatric patients at 30 days after discharge to assess their perceptions of communication during hospitalization and discharge home.9 They identified several caregiver themes pertaining to communication between the inpatient medical team and families, communication challenges due to the teaching hospital environment, and communication between providers. Caregiver concerns included feeling out of the loop, excessive provider use of medical jargon, confusing messages on rounds, and inadequate communication between inpatient and outpatient providers.

The manuscript serves both to uncover family concerns that may be underappreciated by clinicians and to suggest potential solutions. For instance, caregivers can be apprehensive about whom to call for postdischarge advice because they are sometimes uncertain whether their outpatient providers have sufficient information about the hospitalization to properly advise them. The authors propose using photo “face sheets” to improve caregiver identification of healthcare provider roles, including families in hospital committees, improving transition communication between inpatient and outpatient healthcare providers through timely faxed discharge summaries and telephone calls, and informing families about such communications with their outpatient providers.

These are important suggestions. To move from promoting communication alone to promoting true partnership in care, however, providers can take additional steps to fully engage families in hospital and discharge communications.

Meaningful family engagement in hospital communications—eg, during family-centered rounds (FCRs)—has been associated with improved patient safety and experience.10-12 To further enhance family partnership in care, we would make the following 3 suggestions for hospitals and healthcare providers: (1) focus on health literacy in all communications with families, (2) work towards shared decision making (SDM), and (3) make discharges family-centered.

HEALTH LITERACY

To partner with one another, families and healthcare providers need to speak a common language, and a key way to ensure this is for providers to practice good health literacy principles. Health literacy is the “capacity to obtain, process, and understand basic health information and services to make appropriate health decisions.”13 Health literacy is dynamic, varying with the medical problem, provider, and healthcare system.14 Overall, only 12% of United States adults possess the health literacy skills required to navigate our complex healthcare system.15,16 Stress, illness, and other factors can compromise the ability of even these individuals to process and utilize health information. Yet providers routinely overestimate patients’ health literacy.17-19

To optimize communication with families, providers should use “universal health literacy precautions”16 with all patients, not just those believed to need extra assistance, in both verbal (eg, FCRs) and written communications (eg, discharge instructions).16 Providers should speak in plain, nonmedical language, be specific and concrete, and have families engage in “teach-back” (ie, state in their own words their understanding of the plan). They should focus on what families “need to know” rather than what is “good to know.” They should use simple sentence structure and “chunk and check”20 (ie, provide small, “bite-sized” pieces of information and check for understanding by using teach-back).21 In writing, they should similarly use short sentences, bullet points, and active statements, and they should be cognizant of reading level, medical jargon, and word choice (eg, “has a fever” instead of “febrile”). It is worth recognizing that even highly educated, highly literate families—not least of all those who are physicians and nurses themselves—can benefit from universal health literacy precautions because the ability to process and grasp information is dynamic and can be markedly lower than usual when faced with the illness of a loved one.
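For teams that want an objective check on the written half of these precautions, reading level can be estimated automatically before discharge instructions are handed to a family. The snippet below is a minimal, illustrative Python sketch, not drawn from this editorial or from any cited toolkit: it approximates the Flesch-Kincaid grade level of two hypothetical instruction sentences, one in jargon and one in plain language, using a rough heuristic syllable counter. In practice, a validated readability tool would be preferable.

```python
import re


def count_syllables(word: str) -> int:
    """Rough syllable estimate: count vowel groups, drop one trailing silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)


def flesch_kincaid_grade(text: str) -> float:
    """Approximate Flesch-Kincaid grade level of a block of text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59


# Hypothetical discharge-instruction text, for illustration only.
jargon = ("The patient was febrile on admission and received intravenous "
          "antibiotics; she will complete an oral course post-discharge.")
plain = ("Your child had a fever when she came in. She got antibiotic "
         "medicine through an IV. At home, she will take it by mouth.")

print(f"Jargon version: approximate grade {flesch_kincaid_grade(jargon):.1f}")
print(f"Plain version: approximate grade {flesch_kincaid_grade(plain):.1f}")
```

The point of the sketch is simply that the plain-language version scores several grade levels lower, which is the direction universal health literacy precautions push written materials.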

At a systems level, medical schools, nursing schools, residency training programs, and continuing education should include health literacy training in their curricula. While learning to speak the language of medicine is an important part of medical education, the next step is learning to “unspeak” it, a challenging but important charge to promote partnership.

 

 

SHARED DECISION MAKING

SDM is the process by which providers and patients make decisions together by balancing clinical evidence with patient preferences and values.22 However, despite providers believing they are engaging in SDM,23,24 families report that they are often not as involved in SDM as they would like.24-26 Indeed, most hospital communications with families, including FCRs and discharge instructions, typically emphasize information sharing, not SDM; SDM tends to be applied more commonly in outpatient settings.27 To encourage SDM in the hospital setting, patients and families should not only understand communication during FCRs and at discharge but should be encouraged to be active participants in developing care plans,26 no matter how minor the decisions involved.28 SDM can be applied to a variety of discussions, both during hospitalization (eg, initiation of antibiotics, transition from intravenous to oral medications, pursuing imaging) and at discharge (eg, assessing discharge readiness, deciding duration of therapy, formulating follow-up recommendations). Providers will benefit from incorporating information from personal and medical histories that only families possess, resulting in more informed and potentially safer care plans that may be more likely to fit into the family’s life at home. SDM can also promote patient and family “buy-in” and increase the likelihood of adherence to the shared plan.

FAMILY-CENTERED DISCHARGES

Discharge processes often involve multiple redundancies and parallel processes that fail to actively involve families or promote transparency.29 Discharge summaries are typically written in medical jargon and intended for the outpatient provider (who may not receive them in a timely fashion), not the family.30-32 Separate discharge instructions are often provided to families without sufficient attention to health literacy, contingency planning, or individualization (eg, a generic asthma fact sheet).30 Outpatient providers are not always contacted directly about the hospitalization, nor are families always informed when providers are contacted, as Solan et al. describe.

Providers can apply lessons from FCRs to discharge processes, pursuing a similar family-centered, interprofessional approach that promotes partnership and transparency. Just as providers engage families during discussions on FCRs, they can engage families in discharge conversations with outpatient providers and nursing colleagues. Indeed, Berry et al. propose a discharge framework that emphasizes involvement of and dialogue between patients, families, and providers as they systematically develop and assess plans for discharge and postdischarge care.33 To accomplish this, inpatient providers can copy families on discharge summaries and other correspondence with outpatient providers (eg, through secure emails or shared notes such as OpenNotes34-36). Moreover, particularly for complex discharges, inpatient providers can call outpatient providers in the family’s presence or invite outpatient providers to join—via telephone or videoconference—day-of-discharge FCRs or discharge huddles. Such efforts require logistical and pragmatic considerations, as well as culture change, but they are not insurmountable and may help address many family concerns around peridischarge communication and care. Such efforts may also promote accountability on the part of families and providers alike, thereby ensuring that families are truly engaged as vigilant partners in care.

As one of us (SC) once reflected when considering her experience navigating healthcare as a parent of 2 children with cystic fibrosis, “We have to make it easier for families to be a true part of their children’s care. When patients and families are true members of the medical team, care is more informed, more targeted, and more safe for everyone.”

Disclosure: Dr. Landrigan has consulted with and holds equity in the I-PASS Patient Safety Institute, a company that seeks to train institutions in best handoff practices and aid in their implementation. Dr. Landrigan is supported in part by the Children’s Hospital Association for his work as an Executive Council member of the Pediatric Research in Inpatient Settings (PRIS) network. Dr. Landrigan has also served as a paid consultant to Virgin Pulse to help develop a Sleep and Health Program. In addition, Dr. Landrigan has received monetary awards, honoraria, and travel reimbursement from multiple academic and professional organizations for teaching and consulting on sleep deprivation, physician performance, handoffs, and safety and has served as an expert witness in cases regarding patient safety and sleep deprivation.

References

1. Sentinel event statistics released for 2014. The Joint Commission. Jt Comm Online. April 2015. http://www.jointcommission.org/assets/1/23/jconline_April_29_15.pdf. Accessed October 6, 2017.
2. Starmer AJ, Spector ND, Srivastava R, et al. Changes in medical errors after implementation of a handoff program. N Engl J Med. 2014;371(19):1803-1812. doi:10.1056/NEJMsa1405556.
3. Radhakrishnan K, Jones TL, Weems D, Knight TW, Rice WH. Seamless transitions: achieving patient safety through communication and collaboration. J Patient Saf. 2015. doi:10.1097/PTS.0000000000000168.
4. Haig KM, Sutton S, Whittington J. SBAR: a shared mental model for improving communication between clinicians. Jt Comm J Qual Patient Saf. 2006;32(3):167-175.
5. Lingard L, Regehr G, Orser B, et al. Evaluation of a preoperative checklist and team briefing among surgeons, nurses, and anesthesiologists to reduce failures in communication. Arch Surg. 2008;143(1):12-17; discussion 18. doi:10.1001/archsurg.2007.21.
6. Haynes AB, Weiser TG, Berry WR, et al. A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med. 2009;360(5):491-499. doi:10.1056/NEJMsa0810119.
7. Hibbard JH, Peters E, Slovic P, Tusler M. Can patients be part of the solution? Views on their role in preventing medical errors. Med Care Res Rev. 2005;62(5):601-616. doi:10.1177/1077558705279313.
8. Schwappach DL. Review: engaging patients as vigilant partners in safety: a systematic review. Med Care Res Rev. 2010;67(2):119-148. doi:10.1177/1077558709342254.
9. Solan LG, Beck AF, Shardo SA, et al. Caregiver perspectives on communication during hospitalization at an academic pediatric institution: a qualitative study. J Hosp Med. 2017; in press.
10. Mittal VS, Sigrest T, Ottolini MC, et al. Family-centered rounds on pediatric wards: a PRIS network survey of US and Canadian hospitalists. Pediatrics. 2010;126(1):37-43. doi:10.1542/peds.2009-2364.
11. Kuo DZ, Sisterhen LL, Sigrest TE, Biazo JM, Aitken ME, Smith CE. Family experiences and pediatric health services use associated with family-centered rounds. Pediatrics. 2012;130(2):299-305. doi:10.1542/peds.2011-2623.
12. Mittal V, Krieger E, Lee BC, et al. Pediatrics residents’ perspectives on family-centered rounds: a qualitative study at 2 children’s hospitals. J Grad Med Educ. 2013;5(1):81-87. doi:10.4300/JGME-D-11-00314.1.
13. Ratzan SC, Parker RM. Introduction. In: Selden CR, Zorn M, Ratzan SC, Parker RM, eds. National Library of Medicine Current Bibliographies in Medicine: Health Literacy. NLM Pub. No. CBM 2000-1. Bethesda, MD: National Institutes of Health, US Department of Health and Human Services; 2000. http://www.nlm.nih.gov/pubs/cbm/hliteracy.html. Accessed October 6, 2017.
14. Baker DW. The meaning and the measure of health literacy. J Gen Intern Med. 2006;21(8):878-883. doi:10.1111/j.1525-1497.2006.00540.x.
15. Institute of Medicine (US) Committee on Health Literacy. Health Literacy: A Prescription to End Confusion. Nielsen-Bohlman L, Panzer AM, Kindig DA, eds. Washington, DC: National Academies Press; 2004. http://www.ncbi.nlm.nih.gov/books/NBK216032/.
16. Agency for Healthcare Research and Quality. AHRQ Health Literacy Universal Precautions Toolkit. https://www.ahrq.gov/professionals/quality-patient-safety/quality-resources/tools/literacy-toolkit/index.html. Published May 2017. Accessed October 6, 2017.
17. Bass PF 3rd, Wilson JF, Griffith CH, Barnett DR. Residents’ ability to identify patients with poor literacy skills. Acad Med. 2002;77(10):1039-1041.
18. Kelly PA, Haidet P. Physician overestimation of patient literacy: a potential source of health care disparities. Patient Educ Couns. 2007;66(1):119-122. doi:10.1016/j.pec.2006.10.007.
19. Agency for Healthcare Research and Quality. Health Literacy Universal Precautions Toolkit, 2nd Edition. https://www.ahrq.gov/professionals/quality-patient-safety/quality-resources/tools/literacy-toolkit/healthlittoolkit2.html. Published January 30, 2015. Accessed October 6, 2017.
20. NHS The Health Literacy Place. Chunk and check. http://www.healthliteracyplace.org.uk/tools-and-techniques/techniques/chunk-and-check/. Accessed September 28, 2017.
21. Agency for Healthcare Research and Quality. Health Literacy: Hidden Barriers and Practical Strategies. https://www.ahrq.gov/professionals/quality-patient-safety/quality-resources/tools/literacy-toolkit/tool3a/index.html. Accessed September 28, 2017.
22. National Learning Consortium. Shared Decision Making Fact Sheet. December 2013. https://www.healthit.gov/sites/default/files/nlc_shared_decision_making_fact_sheet.pdf. Accessed October 3, 2017.
23. Aarthun A, Akerjordet K. Parent participation in decision-making in health-care services for children: an integrative review. J Nurs Manag. 2014;22(2):177-191. doi:10.1111/j.1365-2834.2012.01457.x.
24. Alderson P, Hawthorne J, Killen M. Parents’ experiences of sharing neonatal information and decisions: consent, cost and risk. Soc Sci Med. 2006;62(6):1319-1329. doi:10.1016/j.socscimed.2005.07.035.
25. Fiks AG, Hughes CC, Gafen A, Guevara JP, Barg FK. Contrasting parents’ and pediatricians’ perspectives on shared decision-making in ADHD. Pediatrics. 2011;127(1):e188-e196. doi:10.1542/peds.2010-1510.
26. Stiggelbout AM, Van der Weijden T, De Wit MP, et al. Shared decision making: really putting patients at the centre of healthcare. BMJ. 2012;344:e256. doi:10.1136/bmj.e256.
27. Kon AA, Davidson JE, Morrison W, et al. Shared decision making in intensive care units: an American College of Critical Care Medicine and American Thoracic Society policy statement. Crit Care Med. 2016;44(1):188-201. doi:10.1097/CCM.0000000000001396.
28. Chorney J, Haworth R, Graham ME, Ritchie K, Curran JA, Hong P. Understanding shared decision making in pediatric otolaryngology. Otolaryngol Head Neck Surg. 2015;152(5):941-947. doi:10.1177/0194599815574998.
29. Wibe T, Ekstedt M, Hellesø R. Information practices of health care professionals related to patient discharge from hospital. Inform Health Soc Care. 2015;40(3):198-209. doi:10.3109/17538157.2013.879150.
30. Kripalani S, Jackson AT, Schnipper JL, Coleman EA. Promoting effective transitions of care at hospital discharge: a review of key issues for hospitalists. J Hosp Med. 2007;2(5):314-323. doi:10.1002/jhm.228.
31. van Walraven C, Seth R, Laupacis A. Dissemination of discharge summaries. Not reaching follow-up physicians. Can Fam Physician. 2002;48:737-742.
32. Leyenaar JK, Bergert L, Mallory LA, et al. Pediatric primary care providers’ perspectives regarding hospital discharge communication: a mixed methods analysis. Acad Pediatr. 2015;15(1):61-68. doi:10.1016/j.acap.2014.07.004.
33. Berry JG, Blaine K, Rogers J, et al. A framework of pediatric hospital discharge care informed by legislation, research, and practice. JAMA Pediatr. 2014;168(10):955-962; quiz 965-966. doi:10.1001/jamapediatrics.2014.891.
34. Bell SK, Gerard M, Fossa A, et al. A patient feedback reporting tool for OpenNotes: implications for patient-clinician safety and quality partnerships. BMJ Qual Saf. 2017;26(4):312-322. doi:10.1136/bmjqs-2016-006020.
35. Bell SK, Mejilla R, Anselmo M, et al. When doctors share visit notes with patients: a study of patient and doctor perceptions of documentation errors, safety opportunities and the patient–doctor relationship. BMJ Qual Saf. 2017;26(4):262-270. doi:10.1136/bmjqs-2015-004697.
36. A Strong Case for Sharing. OpenNotes. https://www.opennotes.org/case-for-opennotes/. Accessed September 19, 2017.

 

 

Journal of Hospital Medicine. 2018;13(5):358-360. Published online first January 18, 2018.

© 2018 Society of Hospital Medicine

Correspondence: Alisa Khan, MD, MPH, Boston Children’s Hospital, 21 Autumn St., Rm 200.2, Boston, MA 02215; Telephone: 617-355-2565; Fax: 617-730-0957; E-mail: alisa.khan@childrens.harvard.edu

Alarm fatigue: Clearing the air

Alarm fatigue is not a new issue for hospitals. In a commentary written over 3 decades ago, Kerr and Hayes described what they saw as an alarming issue developing in intensive care units.[1] Recently, multiple organizations, including The Joint Commission and the Emergency Care Research Institute, have called out alarm fatigue as a patient safety problem,[2, 3, 4] and organizations such as the American Academy of Pediatrics and the American Heart Association are backing away from recommendations for continuous monitoring.[5, 6] Hospitals are scrambling to set up alarm committees and address alarms locally, as recommended by The Joint Commission.[2] In this issue of the Journal of Hospital Medicine, Paine and colleagues set out to review the small but growing body of literature addressing physiologic monitor alarms and interventions that have tried to address alarm fatigue.[7]

After searching through 4629 titles, the authors found 32 articles addressing their key questions: What proportion of alarms are actionable? What is the relationship between clinicians' alarm exposure and response time? Which interventions are effective for reducing alarm rates? The majority of studies identified were observational, with only 8 studies addressing interventions to reduce alarms. Many of the identified studies occurred in units taking care of adults, though 10 descriptive studies and 1 intervention study occurred in pediatric settings. Perhaps the most concerning finding of all, though not surprising to those who work in the hospital setting, was that somewhere between <1% and 26% of alarms across all studies were considered actionable. Although only specifically addressed in 2 studies, the issue of alarm fatigue (i.e., more alarms leading to slower and sometimes absent clinician response) was supported in both, with nurses responding more slowly when exposed to higher numbers of alarms.[8, 9]

The authors note several limitations of their work, one of which is the modest body of literature on the topic. Although several interventions, including widening alarm parameters, increasing alarm delays, and using disposable leads or daily lead changes, have early evidence of success in safely reducing unnecessary alarms, the heterogeneity of this literature precluded a meta‐analysis. Further, the lack of standard definitions and the variety of methods of determining alarm validity make comparison across studies challenging. For this reason, the authors note that they did not distinguish nuisance alarms (i.e., alarms that accurately reflect the patient condition but do not require any intervention) from invalid alarms (i.e., alarms that do not correctly reflect the patient condition). This is relevant because it is likely that interventions to reduce invalid alarms (e.g., frequent lead changes) may be distinct from those that will successfully address nuisance alarms (e.g., widening alarm limits). It is also important to note that although patient safety is of paramount importance, there were other negative consequences of alarms that the authors did not address in this systematic review. Moreover, although avoiding unrecognized deterioration should be a primary goal of any program to reduce alarm fatigue, death remains uncommon compared to the number of patients, families, and healthcare workers exposed to high numbers of alarms during hospitalization. The high number of nonactionable alarms suggests that part of the burden of this problem may lie in more difficult to quantify outcomes such as sleep quality,[10, 11, 12] patient and parent quality of life during hospitalization,[13, 14] and interrupted tasks and cognitive work of healthcare providers.[15]

Paine and colleagues' review has some certain and some less certain implications for the future of alarm research. First, there is a pressing need for researchers and improvers to develop consensus around terminology and metrics. We need to agree on what is and is not an actionable alarm, and we need valid and sensitive metrics to better understand the consequences of not monitoring a patient who should be on monitors. Second, hospitals addressing alarm fatigue need benchmarks. As hospitals rush to comply with The Joint Commission National Patient Safety Goals for alarm management,[2] it is safe to say that our goal should not be zero alarms, but how low do you go? What can we consider a safe number of alarms in our hospitals? Smart alarms hold tremendous potential to improve the sensitivity and positive predictive value of alarms. However, their ultimate success depends on engineers in industry developing the technology as well as on researchers in the hospital setting validating the technology's performance in clinical care. Additionally, hospitals need to know which interventions are most effective to implement and how to implement them reliably in daily practice. What seems less certain is what type of research is best suited to address this need. The authors recommend randomized trials as an immediate next step, and certainly trials are the gold standard in determining efficacy. However, trials may overstate effectiveness as complex bundled interventions play out in complex and dynamic hospital systems. Quasiexperimental study designs, including time series and stepped-wedge designs, would allow for further scientific discovery, such as which interventions are most effective in certain patient populations, while describing reliable implementation of effective methods that lead to lower alarm rates. In both classical randomized controlled trials and quasiexperiments, factorial designs[16, 17] could give us a better understanding of both the comparative effect of interventions and any interaction between them.
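
As a toy illustration of the factorial logic described above (hypothetical numbers, not data from any study), a 2×2 design crossing two alarm-reduction interventions lets one estimate each main effect and their interaction directly from the four cell means.

```python
# Hypothetical mean alarms per patient-day for a 2x2 factorial design
# crossing "widened limits" with "alarm delay" (illustrative numbers only).
cell_means = {
    (False, False): 120.0,  # usual care
    (True, False): 80.0,    # widened limits only
    (False, True): 95.0,    # alarm delay only
    (True, True): 70.0,     # both interventions
}

# Main effect of each intervention, averaged over the levels of the other
widen_effect = ((cell_means[(True, False)] - cell_means[(False, False)]) +
                (cell_means[(True, True)] - cell_means[(False, True)])) / 2
delay_effect = ((cell_means[(False, True)] - cell_means[(False, False)]) +
                (cell_means[(True, True)] - cell_means[(True, False)])) / 2
# Interaction: does the delay help less once limits are already widened?
interaction = ((cell_means[(True, True)] - cell_means[(False, True)]) -
               (cell_means[(True, False)] - cell_means[(False, False)]))

print(widen_effect, delay_effect, interaction)  # -32.5 -17.5 15.0
```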

Alarm fatigue is a widespread problem that has negative effects for patients, families, nurses, and physicians. This review demonstrates that the great majority of alarms do not help clinicians and likely contribute to alarm fatigue. The opportunity to improve care is unquestionably vast, and attention from The Joint Commission and the lay press ensures change will occur. What is critical now is for hospitalists, intensivists, nurses, researchers, and hospital administrators to find the right combination of scientific discovery, thoughtful collaboration with industry, and quality improvement that will inform the literature on which interventions worked, how, and in what setting, and ultimately lead to safer (and quieter) hospitals.

Disclosures

Dr. Brady is supported by the Agency for Healthcare Research and Quality under award number K08HS023827. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality. Dr. Landrigan is supported in part by the Children's Hospital Association for his work as an executive council member of the Pediatric Research in Inpatient Settings network. Dr. Landrigan serves as a consultant to Virgin Pulse regarding sleep, safety, and health. In addition, Dr. Landrigan has received monetary awards, honoraria, and travel reimbursement from multiple academic and professional organizations for delivering lectures on sleep deprivation, physician performance, handoffs, and patient safety, and has served as an expert witness in cases regarding patient safety. The authors report no other funding, financial relationships, or conflicts of interest.

References
  1. Kerr JH, Hayes B. An "alarming" situation in the intensive therapy unit. Intensive Care Med. 1983;9(3):103-104.
  2. The Joint Commission. National Patient Safety Goal on Alarm Management. Available at: http://www.jointcommission.org/assets/1/18/JCP0713_Announce_New_NSPG.pdf. Accessed October 23, 2015.
  3. The Joint Commission. Medical device alarm safety in hospitals. Sentinel Event Alert. 2013;(50):1-3.
  4. Top 10 health technology hazards for 2014. Health Devices. 2013;42(11):354-380.
  5. Ralston SL, Lieberthal AS, Meissner HC, et al. Clinical practice guideline: the diagnosis, management, and prevention of bronchiolitis. Pediatrics. 2014;134(5):e1474-e1502.
  6. Drew BJ, Califf RM, Funk M, et al. Practice standards for electrocardiographic monitoring in hospital settings: an American Heart Association scientific statement from the Councils on Cardiovascular Nursing, Clinical Cardiology, and Cardiovascular Disease in the Young: endorsed by the International Society of Computerized Electrocardiology and the American Association of Critical-Care Nurses. Circulation. 2004;110(17):2721-2746.
  7. Paine CW, Goel VV, Ely E, Stave CD, Stemler S, Zander M, Bonafide CP. Systematic review of physiologic monitor alarm characteristics and pragmatic interventions to reduce alarm frequency. J Hosp Med. 2016;11(2):136-144.
  8. Voepel-Lewis T, Parker ML, Burke CN, et al. Pulse oximetry desaturation alarms on a general postoperative adult unit: a prospective observational study of nurse response time. Int J Nurs Stud. 2013;50(10):1351-1358.
  9. Bonafide CP, Lin R, Zander M, et al. Association between exposure to nonactionable physiologic monitor alarms and response time in a children's hospital. J Hosp Med. 2015;10(6):345-351.
  10. McCann D. Sleep deprivation is an additional stress for parents staying in hospital. J Spec Pediatr Nurs. 2008;13(2):111-122.
  11. Yamanaka H, Haruna J, Mashimo T, Akita T, Kinouchi K. The sound intensity and characteristics of variable-pitch pulse oximeters. J Clin Monit Comput. 2008;22(3):199-207.
  12. Stremler R, Dhukai Z, Wong L, Parshuram C. Factors influencing sleep for parents of critically ill hospitalised children: a qualitative analysis. Intensive Crit Care Nurs. 2011;27(1):37-45.
  13. Miles MS, Burchinal P, Holditch-Davis D, Brunssen S, Wilson SM. Perceptions of stress, worry, and support in Black and White mothers of hospitalized, medically fragile infants. J Pediatr Nurs. 2002;17(2):82-88.
  14. Busse M, Stromgren K, Thorngate L, Thomas KA. Parents' responses to stress in the neonatal intensive care unit. Crit Care Nurse. 2013;33(4):52-59; quiz 60.
  15. Deb S, Claudio D. Alarm fatigue and its influence on staff performance. IIE Trans Healthc Syst Eng. 2015;5(3):183-196.
  16. Moen RD, Nolan TW, Provost LP. Quality Improvement Through Planned Experimentation. 3rd ed. New York, NY: McGraw-Hill; 1991.
  17. Provost LP, Murray SK. The Health Care Data Guide: Learning From Data for Improvement. San Francisco, CA: Jossey-Bass; 2011.

Variation in Printed Handoff Documents: Results and Recommendations From a Multicenter Needs Assessment

Article Type
Changed
Tue, 05/16/2017 - 23:11

Handoffs among hospital providers are highly error prone and can result in serious morbidity and mortality. Best practices for verbal handoffs have been described[1, 2, 3, 4] and include conducting verbal handoffs face to face, providing opportunities for questions, and having the receiver perform a readback, as well as specific content recommendations such as including action items. Far less research has focused on best practices for printed handoff documents,[5, 6] despite the routine use of written handoff tools as a reference by on-call physicians.[7, 8] Erroneous or outdated information on the written handoff can mislead on-call providers, potentially leading to serious medical errors.

In their most basic form, printed handoff documents list patients for whom a provider is responsible. Typically, they also contain demographic information, reason for hospital admission, and a task list for each patient. They may also contain more detailed information on patient history, hospital course, and/or care plan, and may vary among specialties.[9] They come in various forms, ranging from index cards with handwritten notes, to word‐processor or spreadsheet documents, to printed documents that are autopopulated from the electronic health record (EHR).[2] Importantly, printed handoff documents supplement the verbal handoff by allowing receivers to follow along as patients are presented. The concurrent use of written and verbal handoffs may improve retention of clinical information as compared with either alone.[10, 11]

The Joint Commission requires an institutional approach to patient handoffs.[12] The requirements state that handoff communication solutions should take a standardized form, but they do not provide details regarding what data elements should be included in printed or verbal handoffs. Accreditation Council for Graduate Medical Education Common Program Requirements likewise require that residents become competent in patient handoffs[13] but do not provide specific details or measurement tools. Absent widely accepted guidelines, decisions regarding which elements to include in printed handoff documents are currently made at an individual or institutional level.

The I‐PASS study is a federally funded multi‐institutional project that demonstrated a decrease in medical errors and preventable adverse events after implementation of a standardized resident handoff bundle.[14, 15] The I‐PASS Study Group developed a bundle of handoff interventions, beginning with a handoff and teamwork training program (based in part on TeamSTEPPS [Team Strategies and Tools to Enhance Performance and Patient Safety]),[16] a novel verbal mnemonic, I‐PASS (Illness Severity, Patient Summary, Action List, Situation Awareness and Contingency Planning, and Synthesis by Receiver),[17] and changes to the verbal handoff process, in addition to several other elements.

We hypothesized that developing a standardized printed handoff template would reinforce the handoff training and enhance the value of the verbal handoff process changes. Given the paucity of data on best printed handoff practices, however, we first conducted a needs assessment to identify which data elements were currently contained in printed handoffs across sites, and to allow an expert panel to make recommendations for best practices.

METHODS

I‐PASS Study sites included 9 pediatric residency programs at academic medical centers from across North America. Programs were identified through professional networks and invited to participate. The nonintensive care unit hospitalist services at these medical centers are primarily staffed by residents and medical students with attending supervision. At 1 site, nurse practitioners also participate in care. Additional details about study sites can be found in the study descriptions previously published.[14, 15] All sites received local institutional review board approval.

We began by inviting members of the I‐PASS Education Executive Committee (EEC)[14] to build a collective, comprehensive list of possible data elements for printed handoff documents. This committee included pediatric residency program directors, pediatric hospitalists, education researchers, health services researchers, and patient safety experts. We obtained sample handoff documents from pediatric hospitalist services at each of 9 institutions in the United States and Canada (with protected health information redacted). We reviewed these sample handoff documents to characterize their format and to determine what discrete data elements appeared in each site's printed handoff document. Presence or absence of each data element across sites was tabulated. We also queried sites to determine the feasibility of including elements that were not presently included.

Subsequently, I‐PASS site investigators led structured group interviews at participating sites to gather additional information about handoff practices at each site. These structured group interviews included diverse representation from residents, faculty, and residency program leadership, as well as hospitalists and medical students, to ensure the comprehensive acquisition of information regarding site‐specific characteristics. Each group provided answers to a standardized set of open‐ended questions that addressed current practices, handoff education, simulation use, team structure, and the nature of current written handoff tools, if applicable, at each site. One member of the structured group interview served as a scribe and created a document that summarized the content of the structured group interview meeting and answers to the standardized questions.

Consensus on Content

The initial data collection also included a multivote process[18] of the full I‐PASS EEC to help prioritize data elements. Committee members brainstormed a list of all possible data elements for a printed handoff document. Each member (n=14) was given 10 votes to distribute among the elements. Committee members could assign more than 1 vote to an element to emphasize its importance.

The results of this process as well as the current data elements included in each printed handoff tool were reviewed by a subgroup of the I‐PASS EEC. These expert panel members participated in a series of conference calls during which they tabulated categorical information, reviewed narrative comments, discussed existing evidence, and conducted simple content analysis to identify areas of concordance or discordance. Areas of discordance were discussed by the committee. Disagreements were resolved with group consensus with attention to published evidence or best practices, if available.

Elements were divided into those that were essential (unanimous consensus, no conflicting literature) and those that were recommended (majority supported inclusion of element, no conflicting literature). Ratings were assigned using the American College of Cardiology/American Heart Association framework for practice guidelines,[19] in which each element is assigned a classification (I=effective, II=conflicting evidence/opinion, III=not effective) and a level of evidence to support that classification (A=multiple large randomized controlled trials, B=single randomized trial, or nonrandomized studies, C=expert consensus).

The expert panel reached consensus, through active discussion, on a list of data elements that should be included in an ideal printed handoff document. Elements were chosen based on perceived importance, with attention to published best practices[1, 16] and the multivoting results. In making recommendations, consideration was given to whether data elements could be electronically imported into the printed handoff document from the EHR, or whether they would be entered manually. The potential for serious medical errors due to possible errors in manual entry of data was an important aspect of recommendations made. The list of candidate elements was then reviewed by a larger group of investigators from the I‐PASS Education Executive Committee and Coordinating Council for additional input.

The panel asked site investigators from each participating hospital to gather data on the feasibility of redesigning the printed handoff at that hospital to include each recommended element. Site investigators reported whether each element was already included, possible to include but not included currently, or not currently possible to include within that site's printed handoff tool. Site investigators also reported how data elements were populated in their handoff documents, with options including: (1) autopopulated from administrative data (eg, pharmacy‐entered medication list, demographic data entered by admitting office), (2) autoimported from physicians' free‐text entries elsewhere in the EHR (eg, progress notes), (3) free text entered specifically for the printed handoff, or (4) not applicable (element cannot be included).
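
To make these population categories concrete, the following minimal sketch (our own illustration; the field and type names are hypothetical, not drawn from any site's tool) models each printed-handoff element along with the data source a site investigator might report for it.

```python
from dataclasses import dataclass
from enum import Enum

class DataSource(Enum):
    """The four population categories reported for each data element."""
    ADMINISTRATIVE_AUTOPOPULATED = "autopopulated from administrative data"
    EHR_FREE_TEXT_AUTOIMPORTED = "autoimported from free-text entries elsewhere in the EHR"
    MANUAL_ENTRY = "free text entered specifically for the printed handoff"
    NOT_POSSIBLE = "element cannot be included"

@dataclass
class HandoffElement:
    name: str
    source: DataSource
    included_currently: bool

# Hypothetical entries a single site investigator might report
site_report = [
    HandoffElement("Patient name", DataSource.ADMINISTRATIVE_AUTOPOPULATED, True),
    HandoffElement("Patient summary", DataSource.MANUAL_ENTRY, True),
    HandoffElement("Primary language", DataSource.ADMINISTRATIVE_AUTOPOPULATED, False),
]

# Tabulating by source mirrors the counts summarized in Table 1
manual_fields = [e.name for e in site_report if e.source is DataSource.MANUAL_ENTRY]
print(manual_fields)  # ['Patient summary']
```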

RESULTS

Nine programs (100%) provided data on the structure and contents of their printed handoff documents. We found wide variation in structure across the 9 sites. Three sites used a word-processor-based document that required manual entry of all data elements. The other 6 institutions had a direct link with the EHR to enable autopopulation of between 10 and 20 elements on the printed handoff document.

The content of written handoff documents, as well as the sources of data included in them (present or future), likewise varied substantially across sites (Table 1). Only 4 data elements (name, age, weight, and a list of medications) were universally included at all 9 sites. Among the 6 institutions that linked the printed handoff to the EHR, there was also substantial variation in which elements were autoimported. Only 7 elements were universally autoimported at these 6 sites: patient name, medical record number, room number, weight, date of birth, age, and date of admission. Two elements from the original brainstorming (emergency contact and primary language) were not included in any site's document at the time of the assessment.

Table 1. Results of Initial Needs Assessment, With Current and Potential Future Inclusion of Data Elements in Printed Handoff Documents at Nine Study Sites

Data Element | Sites With Data Element Included at Initial Needs Assessment (Out of Nine Sites) | Autoimported* | Manually Entered† | Not Applicable‡
Name | 9 | 6 | 3 | 0
Medical record number | 8 | 6 | 3 | 0
Room number | 8 | 6 | 3 | 0
Allergies | 6 | 4 | 5 | 0
Weight | 9 | 6 | 3 | 0
Age | 9 | 6 | 3 | 0
Date of birth | 6 | 6 | 3 | 0
Admission date | 8 | 6 | 3 | 0
Attending name | 5 | 4 | 5 | 0
Team/service | 7 | 4 | 5 | 0
Illness severity | 1 | 0 | 9 | 0
Patient summary | 8 | 0 | 9 | 0
Action items | 8 | 0 | 9 | 0
Situation monitoring/contingency plan | 5 | 0 | 9 | 0
Medication name | 9 | 4 | 5 | 0
Medication name and dose/route/frequency | 4 | 4 | 5 | 0
Code status | 2 | 2 | 7 | 0
Labs | 6 | 5 | 4 | 0
Access | 2 | 2 | 7 | 0
Ins/outs | 2 | 4 | 4 | 1
Primary language | 0 | 3 | 6 | 0
Vital signs | 3 | 4 | 4 | 1
Emergency contact | 0 | 2 | 7 | 0
Primary care provider | 4 | 4 | 5 | 0

NOTE: The Autoimported, Manually Entered, and Not Applicable columns indicate the data source (current or anticipated). *Includes administrative data and free text entered into other electronic health record fields. †Manually entered directly into the printed handoff document. ‡Data field could not be included due to institutional limitations.

Nine institutions (100%) conducted structured group interviews, ranging in size from 4 to 27 individuals with a median of 5 participants. The documents containing information from each site were provided to the authors. The authors then tabulated categorical information, reviewed narrative comments to understand current institutional practices, and conducted simple content analysis to identify areas of concordance or discordance, particularly with respect to data elements and EHR usage. Based on the results of the printed handoff document review and structured group interviews, with additional perspectives provided by the I‐PASS EEC, the expert panel came to consensus on a list of 23 elements that should be included in printed handoff documents, including 15 essential data elements and 8 additional recommended elements (Table 2).

Table 2. Rating of Essential and Recommended Data Elements for Printed Handoff Template*

Essential Elements
  Patient identifiers
    Patient name (class I, level of evidence C)
    Medical record number (class I, level of evidence C)
    Date of birth (class I, level of evidence C)
  Hospital service identifiers
    Attending name (class I, level of evidence C)
    Team/service (class I, level of evidence C)
    Room number (class I, level of evidence C)
  Admission date (class I, level of evidence C)
  Age (class I, level of evidence C)
  Weight (class I, level of evidence C)
  Illness severity† (class I, level of evidence B)[20, 21]
  Patient summary (class I, level of evidence B)[21, 22]
  Action items (class I, level of evidence B)[21, 22]
  Situation awareness/contingency planning (class I, level of evidence B)[21, 22]
  Allergies (class I, level of evidence C)
  Medications
    Autopopulation of medications (class I, level of evidence B)[22, 23, 24]
    Free-text entry of medications (class IIa, level of evidence C)

Recommended Elements
  Primary language (class IIa, level of evidence C)
  Emergency contact (class IIa, level of evidence C)
  Primary care provider (class IIa, level of evidence C)
  Code status (class IIb, level of evidence C)
  Labs‡ (class IIa, level of evidence C)
  Access (class IIa, level of evidence C)
  Ins/outs (class IIa, level of evidence C)
  Vital signs (class IIa, level of evidence C)

NOTE: Abbreviations: I-PASS, illness severity, patient summary, action list, situation awareness and contingency planning, and synthesis by receiver. *Utilizing the American College of Cardiology Foundation and American Heart Association framework for practice guidelines: classification (I=effective, IIa=conflicting evidence/opinion but weight is in favor of usefulness/efficacy, IIb=usefulness/efficacy less well established by evidence/opinion, III=not effective) and level of evidence to support classification (A=multiple large randomized controlled trials, B=single randomized trial or nonrandomized studies, C=expert consensus). †Preferably using the I-PASS categorization of stable/watcher/unstable, but other categorization okay. ‡Refers to common or patient-specific labs.

Evidence ratings[19] of these elements are included. Several elements are classified as I-B (effective, nonrandomized studies) based on either studies of individual elements or more than 1 study of bundled elements that could reasonably be extrapolated. These include illness severity,[20, 21] patient summary,[21, 22] action items[21, 22] (to-do lists), situation awareness and contingency plan,[21, 22] and medications,[22, 23, 24] with attention to importing from the EHR. Medications entered as free text were classified as IIa-C because of the risk and potential significance of errors; in particular, there was concern that transcription errors, errors of omission, or errors of commission could lead to patient harm. The remaining essential elements are classified as I-C (effective, expert consensus). Of note, date of birth was specifically included as a patient identifier, distinct from age, which was felt to be useful as a descriptor (often within a one-liner or as part of the patient summary).

The 8 recommended elements were those for which there was not unanimous agreement on inclusion, but the majority of the panel felt they should be included. These elements were classified as IIa-C, with 1 exception. Code status generated significant controversy. After extensive discussion and consideration of safety, supervision, educational, and pediatric-specific concerns, all members of the group agreed on its categorization as a recommended element; it is classified as IIb-C.

All members of the group agreed that data elements should be directly imported from the EHR whenever possible. Finally, members agreed that the elements that make up the I-PASS mnemonic (illness severity, patient summary, action items, situation awareness/contingency planning) should be listed in that order whenever possible. A sample I-PASS-compliant printed handoff document is shown in Figure 1.

Figure 1. Sample screenshot of an I-PASS-compliant handoff report. Abbreviations: I-PASS, illness severity, patient summary, action list, situation awareness and contingency planning, and synthesis by receiver.
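
To illustrate how these recommendations might be rendered (a hypothetical sketch in the spirit of Figure 1, not the study sites' actual tools; all field names are our own), a printed handoff section could list the I-PASS elements in mnemonic order and print the final S only as a text reminder, since synthesis by the receiver cannot be prepopulated.

```python
# Hypothetical per-patient handoff record; keys are illustrative only.
patient = {
    "illness_severity": "Watcher",  # stable / watcher / unstable
    "patient_summary": "3-year-old with asthma exacerbation, improving on q2h albuterol.",
    "action_items": ["Wean albuterol to q4h if tolerating", "Reassess at 22:00"],
    "situation_awareness": "If work of breathing increases, obtain blood gas and call the fellow.",
}

IPASS_ORDER = [
    ("I", "Illness severity", "illness_severity"),
    ("P", "Patient summary", "patient_summary"),
    ("A", "Action items", "action_items"),
    ("S", "Situation awareness/contingency plan", "situation_awareness"),
]

def render_ipass_section(record):
    """Render the I-PASS block of a printed handoff in mnemonic order."""
    lines = []
    for letter, label, key in IPASS_ORDER:
        value = record.get(key, "")
        if isinstance(value, list):
            value = "; ".join(value)
        lines.append(f"{letter} - {label}: {value}")
    # The final S cannot be prepopulated; print it as a reminder only.
    lines.append("S - Synthesis by receiver")
    return "\n".join(lines)

print(render_ipass_section(patient))
```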

DISCUSSION

We identified substantial variability in the structure and content of printed handoff documents used by 9 pediatric hospitalist teaching services, reflective of a lack of standardization. We found that institutional printed handoff documents shared some demographic elements (eg, name, room, medical record number) but also varied in clinical content (eg, vital signs, lab tests, code status). Our expert panel developed a list of 15 essential and 8 recommended data elements for printed handoff documents. Although this is a large number of fields, the majority of the essential fields were already included by most sites, and many are basic demographic identifiers. Illness severity is the 1 essential field that was not routinely included; however, including this type of overview is consistently recommended[2, 4] and supported by evidence,[20, 21] and contributes to building a shared mental model.[16] We recommend the categories of stable/watcher/unstable.[17]

Several prior single‐center studies have found that introducing a printed handoff document can lead to improvements in workflow, communication, and patient safety. In an early study, Petersen et al.[25] showed an association between use of a computerized sign‐out program and reduced odds of preventable adverse events during periods of cross‐coverage. Wayne et al.[26] reported fewer perceived inaccuracies in handoff documents as well as improved clarity at the time of transfer, supporting the role for standardization. Van Eaton et al.[27] demonstrated rapid uptake and desirability of a computerized handoff document, which combined autoimportation of information from an EHR with resident‐entered patient details, reflecting the importance of both data sources. In addition, they demonstrated improvements in both the rounding and sign‐out processes.[28]

Two studies reported increased use of specific fields after implementation. Payne et al. implemented a Web-based handoff tool and documented significant increases in the number of handoffs containing problem lists, medication lists, and code status, accompanied by perceived improvements in the quality of handoffs and fewer near-miss events.[24] Starmer et al. found that introduction of a resident handoff bundle that included a printed handoff tool led to reductions in medical errors and adverse events.[22] The study group using the tool populated 11 data elements more often after implementation, and introduction of this printed handoff tool in particular was associated with reductions in written handoff miscommunications. Neither study included a subanalysis to indicate which data elements may have been most important.

In contrast to previous single-institution studies, our recommendations for a printed handoff template come from evaluations of tools and discussions with front-line providers across 9 institutions. We had substantial overlap with the data elements recommended by Van Eaton et al.[27] However, several elements did not overlap with published templates, including weight, ins/outs, primary language, emergency contact information, and primary care provider. Other published handoff tools have been highly specialized (eg, for cardiac intensive care) or have included many fewer data elements than our group felt were essential. These differences may reflect the unique aspects of caring for pediatric patients (eg, the need for weights) and the absence of defined protocols for many pediatric conditions. In addition, the level of detail needed for contingency planning may vary between teaching and nonteaching services.

Resident physicians may provide valuable information in the development of standardized handoff documents. Clark et al.,[29] at Virginia Mason University, utilized resident-driven continuous quality improvement processes, including real-time feedback, to implement an electronic template. They found that engagement of both senior leaders and front-line users was an important component of their success in uptake. Our study utilized residents as essential members of the structured group interviews to ensure that front-line users' needs were represented as the recommendations for a printed handoff tool template were developed.

As previously described,[17] our study group had identified several key data elements that should be included in verbal handoffs: illness severity, a patient summary, a discrete action list, situation awareness/contingency planning, and a synthesis by receiver. With consideration of the multivoting results as well as known best practices,[1, 4, 12] the expert panel for this study agreed that each of these elements should also be highlighted in the printed template to ensure consistency between the printed document and the verbal handoff, and to have each reinforce the other. On the printed handoff tool, the final S in the I-PASS mnemonic (synthesis by receiver) cannot be prepopulated, but considering the importance of this step,[16, 30, 31, 32] it should be printed as "synthesis by receiver" to serve as a text reminder to both givers and receivers.

The panel also felt, however, that the printed handoff document should provide additional background information not routinely included in a verbal handoff. It should serve as a reference tool both at the time of verbal handoff and throughout the day and night, and therefore should include more comprehensive information than is necessary or appropriate to convey during the verbal handoff. We identified 10 data elements that are essential in a printed handoff document in addition to the I‐PASS elements (Table 2).

Patient demographic data elements, as well as team assignments and the attending physician, were uniformly supported for inclusion. The medication list was viewed as essential; however, the panel also recognized the potential for medical errors due to inaccuracies in the medication list. In particular, there was concern that including all fields of a medication order (drug, dose, route, frequency) would result in handoffs containing a high proportion of inaccurate information, particularly for complex patients whose medication regimens may vary over the course of hospitalization. Therefore, the panel agreed that if medication lists were entered manually, only the medication name should be included, so as not to perpetuate inaccurate or potentially harmful information. If medication lists were autoimported from an EHR, they should include drug name, dose, route, and frequency if possible.
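
A minimal sketch of this medication rule as we read it (hypothetical field names, our own illustration): print the full order only when it is autoimported from the EHR, and fall back to the drug name alone when the list is entered manually.

```python
def format_medication_line(med, autoimported_from_ehr):
    """Full details only when autoimported; manually entered lists show the
    drug name alone to avoid perpetuating transcription errors."""
    if autoimported_from_ehr:
        return f"{med['name']} {med['dose']} {med['route']} {med['frequency']}"
    return med["name"]

albuterol = {"name": "Albuterol", "dose": "2.5 mg", "route": "nebulized", "frequency": "q4h"}
print(format_medication_line(albuterol, autoimported_from_ehr=True))   # full order details
print(format_medication_line(albuterol, autoimported_from_ehr=False))  # name only
```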

In the I‐PASS study,[15] all institutions implemented printed handoff documents that included fields for the essential data elements. After implementation, there was a significant increase in completion of all essential fields. Although there is limited evidence to support any individual data element, increased usage of these elements was associated with the overall study finding of decreased rates of medical errors and preventable adverse events.

EHRs have the potential to help standardize printed handoff documents[5, 6, 33, 34, 35]; all participants in our study agreed that printed handoff documents should ideally be linked with the EHR and should autoimport data wherever appropriate. Manually populated (eg, word processor‐ or spreadsheet‐based) handoff tools have important limitations, particularly related to the potential for typographical errors as well as accidental omission of data fields, and lead to unnecessary duplication of work (eg, re‐entering data already included in a progress note) that can waste providers' time. It was also acknowledged that word processor‐ or spreadsheet‐based documents may have flexibility that is lacking in EHR‐based handoff documents. For example, formatting can more easily be adjusted to increase the number of patients per printed page. As technology advances, printed documents may be phased out in favor of EHR‐based on‐screen reports, which by their nature would be more accurate due to real‐time autoupdates.

In making recommendations about essential versus recommended items for inclusion in the printed handoff template, the only data element that generated controversy among our experts was code status. Some felt that it should be included as an essential element, whereas others did not. We believe that this was unique to our practice in pediatric hospital ward settings, as codes are rare in most pediatric ward settings. Among the concerns expressed with including code status for all patients was that residents might assume patients were full-code without verifying; the potential inaccuracy created by this might have severe implications. Alternatively, residents might feel obligated to have code discussions with all patients regardless of severity of illness, which may be inappropriate in a pediatric population. Several educators expressed concerns about trainees having unsupervised code-status conversations with families of pediatric patients. Conversely, although codes are rare in pediatric ward settings, concerns were raised that not including code status could be problematic during these rare but critically important events. Other fields, such as weight, might have less relevance for an adult population in which emergency drug doses are standardized.

Limitations

Our study has several limitations. We only collected data from hospitalist services at pediatric sites. It is likely that providers in other specialties would have specific data elements they felt were essential (eg, postoperative day, code status). Our methodology was expert consensus based, driven by data collection from sites that were already participating in the I‐PASS study. Although the I‐PASS study demonstrated decreased rates of medical errors and preventable adverse events with inclusion of these data elements as part of a bundle, future research will be required to evaluate whether some of these items are more important than others in improving written communication and ultimately patient safety. In spite of these limitations, our work represents an important starting point for the development of standards for written handoff documents that should be used in patient handoffs, particularly those generated from EHRs.

CONCLUSIONS

In this article we describe the results of a needs assessment that informed expert consensus‐based recommendations for data elements to include in a printed handoff document. We recommend that pediatric programs include the elements identified as part of a standardized written handoff tool. Although many of these elements are also applicable to other specialties, future work should be conducted to adapt the printed handoff document elements described here for use in other specialties and settings. Future studies should work to validate the importance of these elements, studying the manner in which their inclusion affects the quality of written handoffs, and ultimately patient safety.

Acknowledgements

Members of the I‐PASS Study Education Executive Committee who contributed to this manuscript include: Boston Children's Hospital/Harvard Medical School (primary site) (Christopher P. Landrigan, MD, MPH, Elizabeth L. Noble, BA. Theodore C. Sectish, MD. Lisa L. Tse, BA). Cincinnati Children's Hospital Medical Center/University of Cincinnati College of Medicine (Jennifer K. O'Toole, MD, MEd). Doernbecher Children's Hospital/Oregon Health and Science University (Amy J. Starmer, MD, MPH). Hospital for Sick Children/University of Toronto (Zia Bismilla, MD. Maitreya Coffey, MD). Lucile Packard Children's Hospital/Stanford University (Lauren A. Destino, MD. Jennifer L. Everhart, MD. Shilpa J. Patel, MD [currently at Kapi'olani Children's Hospital/University of Hawai'i School of Medicine]). National Capital Consortium (Jennifer H. Hepps, MD. Joseph O. Lopreiato, MD, MPH. Clifton E. Yu, MD). Primary Children's Medical Center/University of Utah (James F. Bale, Jr., MD. Adam T. Stevenson, MD). St. Louis Children's Hospital/Washington University (F. Sessions Cole, MD). St. Christopher's Hospital for Children/Drexel University College of Medicine (Sharon Calaman, MD. Nancy D. Spector, MD). Benioff Children's Hospital/University of California San Francisco School of Medicine (Glenn Rosenbluth, MD. Daniel C. West, MD).

Additional I‐PASS Study Group members who contributed to this manuscript include April D. Allen, MPA, MA (Heller School for Social Policy and Management, Brandeis University, previously affiliated with Boston Children's Hospital), Madelyn D. Kahana, MD (The Children's Hospital at Montefiore/Albert Einstein College of Medicine, previously affiliated with Lucile Packard Children's Hospital/Stanford University), Robert S. McGregor, MD (Akron Children's Hospital/Northeast Ohio Medical University, previously affiliated with St. Christopher's Hospital for Children/Drexel University), and John S. Webster, MD, MBA, MS (Webster Healthcare Consulting Inc., formerly of the Department of Defense).

Members of the I‐PASS Study Group include individuals from the institutions listed below as follows: Boston Children's Hospital/Harvard Medical School (primary site): April D. Allen, MPA, MA (currently at Heller School for Social Policy and Management, Brandeis University), Angela M. Feraco, MD, Christopher P. Landrigan, MD, MPH, Elizabeth L. Noble, BA, Theodore C. Sectish, MD, Lisa L. Tse, BA. Brigham and Women's Hospital (data coordinating center): Anuj K. Dalal, MD, Carol A. Keohane, BSN, RN, Stuart Lipsitz, PhD, Jeffrey M. Rothschild, MD, MPH, Matt F. Wien, BS, Catherine S. Yoon, MS, Katherine R. Zigmont, BSN, RN. Cincinnati Children's Hospital Medical Center/University of Cincinnati College of Medicine: Javier Gonzalez del Rey, MD, MEd, Jennifer K. O'Toole, MD, MEd, Lauren G. Solan, MD. Doernbecher Children's Hospital/Oregon Health and Science University: Megan E. Aylor, MD, Amy J. Starmer, MD, MPH, Windy Stevenson, MD, Tamara Wagner, MD. Hospital for Sick Children/University of Toronto: Zia Bismilla, MD, Maitreya Coffey, MD, Sanjay Mahant, MD, MSc. Lucile Packard Children's Hospital/Stanford University: Rebecca L. Blankenburg, MD, MPH, Lauren A. Destino, MD, Jennifer L. Everhart, MD, Madelyn Kahana, MD, Shilpa J. Patel, MD (currently at Kapi'olani Children's Hospital/University of Hawaii School of Medicine). National Capital Consortium: Jennifer H. Hepps, MD, Joseph O. Lopreiato, MD, MPH, Clifton E. Yu, MD. Primary Children's Hospital/University of Utah: James F. Bale, Jr., MD, Jaime Blank Spackman, MSHS, CCRP, Rajendu Srivastava, MD, FRCP(C), MPH, Adam Stevenson, MD. St. Louis Children's Hospital/Washington University: Kevin Barton, MD, Kathleen Berchelmann, MD, F. Sessions Cole, MD, Christine Hrach, MD, Kyle S. Schultz, MD, Michael P. Turmelle, MD, Andrew J. White, MD. St. Christopher's Hospital for Children/Drexel University: Sharon Calaman, MD, Bronwyn D. Carlson, MD, Robert S. McGregor, MD (currently at Akron Children's Hospital/Northeast Ohio Medical University), Vahideh Nilforoshan, MD, Nancy D. Spector, MD. and Benioff Children's Hospital/University of California San Francisco School of Medicine: Glenn Rosenbluth, MD, Daniel C. West, MD. Dorene Balmer, PhD, RD, Carol L. Carraccio, MD, MA, Laura Degnon, CAE, and David McDonald, and Alan Schwartz PhD serve the I‐PASS Study Group as part of the IIPE. Karen M. Wilson, MD, MPH serves the I‐PASS Study Group as part of the advisory board from the PRIS Executive Council. John Webster served the I‐PASS Study Group and Education Executive Committee as a representative from TeamSTEPPS.

Disclosures: The I‐PASS Study was primarily supported by the US Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation (1R18AE000029‐01). The opinions and conclusions expressed herein are solely those of the author(s) and should not be constructed as representing the opinions or policy of any agency of the federal government. Developed with input from the Initiative for Innovation in Pediatric Education and the Pediatric Research in Inpatient Settings Network (supported by the Children's Hospital Association, the Academic Pediatric Association, the American Academy of Pediatrics, and the Society of Hospital Medicine). A. J. S. was supported by the Agency for Healthcare Research and Quality/Oregon Comparative Effectiveness Research K12 Program (1K12HS019456‐01). Additional funding for the I‐PASS Study was provided by the Medical Research Foundation of Oregon, Physician Services Incorporated Foundation (Ontario, Canada), and Pfizer (unrestricted medical education grant to N.D.S.). C.P.L, A.J.S. were supported by the Oregon Comparative Effectiveness Research K12 Program (1K12HS019456 from the Agency for Healthcare Research and Quality). A.J.S. was also supported by the Medical Research Foundation of Oregon. The authors report no conflicts of interest.

References
1. Patterson ES, Roth EM, Woods DD, Chow R, Gomes JO. Handoff strategies in settings with high consequences for failure: lessons for health care operations. Int J Qual Health Care. 2004;16(2):125-132.
2. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign-out. J Hosp Med. 2006;1(4):257-266.
3. Horwitz LI, Moin T, Green ML. Development and implementation of an oral sign-out skills curriculum. J Gen Intern Med. 2007;22(10):1470-1474.
4. Arora VM, Manjarrez E, Dressler DD, Basaviah P, Halasyamani L, Kripalani S. Hospitalist handoffs: a systematic review and task force recommendations. J Hosp Med. 2009;4(7):433-440.
5. Abraham J, Kannampallil T, Patel VL. A systematic review of the literature on the evaluation of handoff tools: implications for research and practice. J Am Med Inform Assoc. 2014;21(1):154-162.
6. Li P, Ali S, Tang C, Ghali WA, Stelfox HT. Review of computerized physician handoff tools for improving the quality of patient care. J Hosp Med. 2013;8(8):456-463.
7. McSweeney ME, Landrigan CP, Jiang H, Starmer A, Lightdale JR. Answering questions on call: pediatric resident physicians' use of handoffs and other resources. J Hosp Med. 2013;8(6):328-333.
8. Fogerty RL, Schoenfeld A, Salim Al-Damluji M, Horwitz LI. Effectiveness of written hospitalist sign-outs in answering overnight inquiries. J Hosp Med. 2013;8(11):609-614.
9. Schoenfeld AR, Salim Al-Damluji M, Horwitz LI. Sign-out snapshot: cross-sectional evaluation of written sign-outs among specialties. BMJ Qual Saf. 2014;23(1):66-72.
10. Bhabra G, Mackeith S, Monteiro P, Pothier DD. An experimental comparison of handover methods. Ann R Coll Surg Engl. 2007;89(3):298-300.
11. Pothier D, Monteiro P, Mooktiar M, Shaw A. Pilot study to show the loss of important data in nursing handover. Br J Nurs. 2005;14(20):1090-1093.
12. The Joint Commission. Hospital Accreditation Standards 2015. Joint Commission Resources; 2015:PC.02.02.01.
13. Accreditation Council for Graduate Medical Education. Common Program Requirements. 2013. Available at: http://acgme.org/acgmeweb/tabid/429/ProgramandInstitutionalAccreditation/CommonProgramRequirements.aspx. Accessed May 11, 2015.
14. Sectish TC, Starmer AJ, Landrigan CP, Spector ND. Establishing a multisite education and research project requires leadership, expertise, collaboration, and an important aim. Pediatrics. 2010;126(4):619-622.
15. Starmer AJ, Spector ND, Srivastava R, et al. Changes in medical errors after implementation of a handoff program. N Engl J Med. 2014;371(19):1803-1812.
16. US Department of Health and Human Services, Agency for Healthcare Research and Quality. TeamSTEPPS website. Available at: http://teamstepps.ahrq.gov/. Accessed July 12, 2013.
17. Starmer AJ, Spector ND, Srivastava R, Allen AD, Landrigan CP, Sectish TC. I-PASS, a mnemonic to standardize verbal handoffs. Pediatrics. 2012;129(2):201-204.
18. Scholtes P, Joiner B, Streibel B. The Team Handbook. 3rd ed. Middleton, WI: Oriel STAT A MATRIX; 2010.
19. ACC/AHA Task Force on Practice Guidelines. Methodology Manual and Policies From the ACCF/AHA Task Force on Practice Guidelines. Available at: http://my.americanheart.org/idc/groups/ahamah-public/@wcm/@sop/documents/downloadable/ucm_319826.pdf. Published June 2010. Accessed January 11, 2015.
20. Naessens JM, Campbell CR, Shah N, et al. Effect of illness severity and comorbidity on patient safety and adverse events. Am J Med Qual. 2012;27(1):48-57.
21. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. Consequences of inadequate sign-out for patient care. Arch Intern Med. 2008;168(16):1755-1760.
22. Starmer AJ, Sectish TC, Simon DW, et al. Rates of medical errors and preventable adverse events among hospitalized children following implementation of a resident handoff bundle. JAMA. 2013;310(21):2262-2270.
23. Arora V, Kao J, Lovinger D, Seiden SC, Meltzer D. Medication discrepancies in resident sign-outs and their potential to harm. J Gen Intern Med. 2007;22(12):1751-1755.
24. Payne CE, Stein JM, Leong T, Dressler DD. Avoiding handover fumbles: a controlled trial of a structured handover tool versus traditional handover methods. BMJ Qual Saf. 2012;21(11):925-932.
25. Petersen LA, Orav EJ, Teich JM, O'Neil AC, Brennan TA. Using a computerized sign-out program to improve continuity of inpatient care and prevent adverse events. Jt Comm J Qual Improv. 1998;24(2):77-87.
26. Wayne JD, Tyagi R, Reinhardt G, et al. Simple standardized patient handoff system that increases accuracy and completeness. J Surg Educ. 2008;65(6):476-485.
27. Van Eaton EG, Horvath KD, Lober WB, Pellegrini CA. Organizing the transfer of patient care information: the development of a computerized resident sign-out system. Surgery. 2004;136(1):5-13.
28. Van Eaton EG, Horvath KD, Lober WB, Rossini AJ, Pellegrini CA. A randomized, controlled trial evaluating the impact of a computerized rounding and sign-out system on continuity of care and resident work hours. J Am Coll Surg. 2005;200(4):538-545.
29. Clark CJ, Sindell SL, Koehler RP. Template for success: using a resident-designed sign-out template in the handover of patient care. J Surg Educ. 2011;68(1):52-57.
30. Boyd M, Cumin D, Lombard B, Torrie J, Civil N, Weller J. Read-back improves information transfer in simulated clinical crises. BMJ Qual Saf. 2014;23(12):989-993.
31. Chang VY, Arora VM, Lev-Ari S, D'Arcy M, Keysar B. Interns overestimate the effectiveness of their hand-off communication. Pediatrics. 2010;125(3):491-496.
32. Barenfanger J, Sautter RL, Lang DL, Collins SM, Hacek DM, Peterson LR. Improving patient safety by repeating (read-back) telephone reports of critical information. Am J Clin Pathol. 2004;121(6):801-803.
33. Collins SA, Stein DM, Vawdrey DK, Stetson PD, Bakken S. Content overlap in nurse and physician handoff artifacts and the potential role of electronic health records: a systematic review. J Biomed Inform. 2011;44(4):704-712.
34. Laxmisan A, McCoy AB, Wright A, Sittig DF. Clinical summarization capabilities of commercially-available and internally-developed electronic health records. Appl Clin Inform. 2012;3(1):80-93.
35. Hunt S, Staggers N. An analysis and recommendations for multidisciplinary computerized handoff applications in hospitals. AMIA Annu Symp Proc. 2011;2011:588-597.

Handoffs among hospital providers are highly error prone and can result in serious morbidity and mortality. Best practices for verbal handoffs have been described[1, 2, 3, 4]; they include conducting the handoff face to face, providing opportunities for questions, and having the receiver perform a readback, as well as specific content recommendations such as including action items. Far less research has focused on best practices for printed handoff documents,[5, 6] despite the routine use of written handoff tools as a reference by on-call physicians.[7, 8] Erroneous or outdated information on the written handoff can mislead on-call providers, potentially leading to serious medical errors.

In their most basic form, printed handoff documents list patients for whom a provider is responsible. Typically, they also contain demographic information, reason for hospital admission, and a task list for each patient. They may also contain more detailed information on patient history, hospital course, and/or care plan, and may vary among specialties.[9] They come in various forms, ranging from index cards with handwritten notes, to word‐processor or spreadsheet documents, to printed documents that are autopopulated from the electronic health record (EHR).[2] Importantly, printed handoff documents supplement the verbal handoff by allowing receivers to follow along as patients are presented. The concurrent use of written and verbal handoffs may improve retention of clinical information as compared with either alone.[10, 11]
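To make the typical contents concrete, the sketch below (Python) models one patient's entry in such a document. It is purely illustrative: the class and field names are assumptions made for this example, not a schema drawn from any of the tools discussed in this article.

```python
# Illustrative sketch only: a minimal model of one patient's entry in a
# printed handoff document. Class and field names are assumptions made for
# illustration, not a schema taken from any specific handoff tool.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class HandoffEntry:
    name: str                       # patient demographics
    medical_record_number: str
    room: str
    admission_reason: str           # reason for hospital admission
    action_items: List[str] = field(default_factory=list)   # task list ("to-dos")
    patient_summary: Optional[str] = None                    # optional narrative detail


# In its most basic form, a provider's printed handoff is a list of such entries.
handoff_list = [
    HandoffEntry(
        name="Example Patient",
        medical_record_number="0000000",
        room="7E-12",
        admission_reason="bronchiolitis",
        action_items=["wean oxygen per protocol", "recheck electrolytes overnight"],
    )
]
```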

The Joint Commission requires an institutional approach to patient handoffs.[12] The requirements state that handoff communication solutions should take a standardized form, but they do not provide details regarding what data elements should be included in printed or verbal handoffs. Accreditation Council for Graduate Medical Education Common Program Requirements likewise require that residents must become competent in patient handoffs[13] but do not provide specific details or measurement tools. Absent widely accepted guidelines, decisions regarding which elements to include in printed handoff documents are currently made at an individual or institutional level.

The I‐PASS study is a federally funded multi‐institutional project that demonstrated a decrease in medical errors and preventable adverse events after implementation of a standardized resident handoff bundle.[14, 15] The I‐PASS Study Group developed a bundle of handoff interventions, beginning with a handoff and teamwork training program (based in part on TeamSTEPPS [Team Strategies and Tools to Enhance Performance and Patient Safety]),[16] a novel verbal mnemonic, I‐PASS (Illness Severity, Patient Summary, Action List, Situation Awareness and Contingency Planning, and Synthesis by Receiver),[17] and changes to the verbal handoff process, in addition to several other elements.

We hypothesized that developing a standardized printed handoff template would reinforce the handoff training and enhance the value of the verbal handoff process changes. Given the paucity of data on best printed handoff practices, however, we first conducted a needs assessment to identify which data elements were currently contained in printed handoffs across sites, and to allow an expert panel to make recommendations for best practices.

METHODS

I‐PASS Study sites included 9 pediatric residency programs at academic medical centers from across North America. Programs were identified through professional networks and invited to participate. The nonintensive care unit hospitalist services at these medical centers are primarily staffed by residents and medical students with attending supervision. At 1 site, nurse practitioners also participate in care. Additional details about study sites can be found in the study descriptions previously published.[14, 15] All sites received local institutional review board approval.

We began by inviting members of the I‐PASS Education Executive Committee (EEC)[14] to build a collective, comprehensive list of possible data elements for printed handoff documents. This committee included pediatric residency program directors, pediatric hospitalists, education researchers, health services researchers, and patient safety experts. We obtained sample handoff documents from pediatric hospitalist services at each of 9 institutions in the United States and Canada (with protected health information redacted). We reviewed these sample handoff documents to characterize their format and to determine what discrete data elements appeared in each site's printed handoff document. Presence or absence of each data element across sites was tabulated. We also queried sites to determine the feasibility of including elements that were not presently included.
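The tabulation step can be pictured with a short sketch (Python). The site names and element sets below are invented placeholders; only the counting pattern reflects the process described above.

```python
# Sketch: tabulating presence/absence of data elements across sites.
# Site names and element sets are invented for illustration.
site_elements = {
    "site_A": {"name", "age", "weight", "medications", "allergies"},
    "site_B": {"name", "age", "weight", "medications", "code status"},
    "site_C": {"name", "age", "weight", "medications"},
}

all_elements = set().union(*site_elements.values())
counts = {
    element: sum(element in elements for elements in site_elements.values())
    for element in all_elements
}

for element, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{element}: present at {n} of {len(site_elements)} sites")
```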

Subsequently, I‐PASS site investigators led structured group interviews at participating sites to gather additional information about handoff practices at each site. These structured group interviews included diverse representation from residents, faculty, and residency program leadership, as well as hospitalists and medical students, to ensure the comprehensive acquisition of information regarding site‐specific characteristics. Each group provided answers to a standardized set of open‐ended questions that addressed current practices, handoff education, simulation use, team structure, and the nature of current written handoff tools, if applicable, at each site. One member of the structured group interview served as a scribe and created a document that summarized the content of the structured group interview meeting and answers to the standardized questions.

Consensus on Content

The initial data collection also included a multivote process[18] of the full I‐PASS EEC to help prioritize data elements. Committee members brainstormed a list of all possible data elements for a printed handoff document. Each member (n=14) was given 10 votes to distribute among the elements. Committee members could assign more than 1 vote to an element to emphasize its importance.
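As a rough illustration of how such a multivote tally might be computed, consider the sketch below (Python). The ballots shown are invented and abbreviated; in the actual process, each of the 14 committee members distributed 10 votes and could give a single element more than 1 vote.

```python
# Sketch: tallying a multivote prioritization. Each member distributes 10 votes
# across candidate elements and may give one element more than one vote.
# The two ballots below are invented for illustration.
from collections import Counter

member_votes = [
    ["patient summary"] * 3 + ["action items"] * 3 + ["illness severity"] * 2
    + ["allergies", "medications"],
    ["action items"] * 4 + ["patient summary"] * 2 + ["medications"] * 2
    + ["code status", "weight"],
]

tally = Counter(vote for ballot in member_votes for vote in ballot)
for element, votes in tally.most_common():
    print(f"{element}: {votes}")
```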

The results of this process as well as the current data elements included in each printed handoff tool were reviewed by a subgroup of the I‐PASS EEC. These expert panel members participated in a series of conference calls during which they tabulated categorical information, reviewed narrative comments, discussed existing evidence, and conducted simple content analysis to identify areas of concordance or discordance. Areas of discordance were discussed by the committee. Disagreements were resolved with group consensus with attention to published evidence or best practices, if available.

Elements were divided into those that were essential (unanimous consensus, no conflicting literature) and those that were recommended (majority supported inclusion of element, no conflicting literature). Ratings were assigned using the American College of Cardiology/American Heart Association framework for practice guidelines,[19] in which each element is assigned a classification (I=effective, II=conflicting evidence/opinion, III=not effective) and a level of evidence to support that classification (A=multiple large randomized controlled trials, B=single randomized trial, or nonrandomized studies, C=expert consensus).
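The rating scheme itself can be summarized in a small sketch (Python). The enumerations simply encode the classes and evidence levels defined above, using the fuller IIa/IIb breakdown that appears in Table 2; the example rating shown is the one the panel later assigned to illness severity.

```python
# Sketch encoding the ACC/AHA-style rating framework described above, using
# the IIa/IIb breakdown that appears in Table 2. Purely illustrative.
from enum import Enum


class RecommendationClass(Enum):
    I = "effective"
    IIa = "conflicting evidence/opinion, but weight favors usefulness/efficacy"
    IIb = "usefulness/efficacy less well established"
    III = "not effective"


class EvidenceLevel(Enum):
    A = "multiple large randomized controlled trials"
    B = "single randomized trial or nonrandomized studies"
    C = "expert consensus"


# Example: the panel's eventual rating for the illness severity element (I-B).
illness_severity_rating = (RecommendationClass.I, EvidenceLevel.B)
```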

The expert panel reached consensus, through active discussion, on a list of data elements that should be included in an ideal printed handoff document. Elements were chosen based on perceived importance, with attention to published best practices[1, 16] and the multivoting results. In making recommendations, consideration was given to whether data elements could be electronically imported into the printed handoff document from the EHR, or whether they would be entered manually. The potential for serious medical errors due to possible errors in manual entry of data was an important aspect of recommendations made. The list of candidate elements was then reviewed by a larger group of investigators from the I‐PASS Education Executive Committee and Coordinating Council for additional input.

The panel asked site investigators from each participating hospital to gather data on the feasibility of redesigning the printed handoff at that hospital to include each recommended element. Site investigators reported whether each element was already included, possible to include but not included currently, or not currently possible to include within that site's printed handoff tool. Site investigators also reported how data elements were populated in their handoff documents, with options including: (1) autopopulated from administrative data (eg, pharmacy‐entered medication list, demographic data entered by admitting office), (2) autoimported from physicians' free‐text entries elsewhere in the EHR (eg, progress notes), (3) free text entered specifically for the printed handoff, or (4) not applicable (element cannot be included).

RESULTS

Nine programs (100%) provided data on the structure and contents of their printed handoff documents. We found wide variation in structure across the 9 sites. Three sites used a word-processor-based document that required manual entry of all data elements. The other 6 institutions had a direct link with the EHR that enabled autopopulation of between 10 and 20 elements on the printed handoff document.

The content of written handoff documents, as well as the sources of data included in them (present or future), likewise varied substantially across sites (Table 1). Only 4 data elements (name, age, weight, and a list of medications) were universally included at all 9 sites. Among the 6 institutions that linked the printed handoff to the EHR, there was also substantial variation in which elements were autoimported. Only 7 elements were universally autoimported at these 6 sites: patient name, medical record number, room number, weight, date of birth, age, and date of admission. Two elements from the original brainstorming were not presently included in any sites' documents (emergency contact and primary language).

Table 1. Results of Initial Needs Assessment, With Current and Potential Future Inclusion of Data Elements in Printed Handoff Documents at Nine Study Sites

Data Element | Sites With Element Included at Initial Needs Assessment (of 9) | Autoimported* | Manually Entered† | Not Applicable‡
Name | 9 | 6 | 3 | 0
Medical record number | 8 | 6 | 3 | 0
Room number | 8 | 6 | 3 | 0
Allergies | 6 | 4 | 5 | 0
Weight | 9 | 6 | 3 | 0
Age | 9 | 6 | 3 | 0
Date of birth | 6 | 6 | 3 | 0
Admission date | 8 | 6 | 3 | 0
Attending name | 5 | 4 | 5 | 0
Team/service | 7 | 4 | 5 | 0
Illness severity | 1 | 0 | 9 | 0
Patient summary | 8 | 0 | 9 | 0
Action items | 8 | 0 | 9 | 0
Situation monitoring/contingency plan | 5 | 0 | 9 | 0
Medication name | 9 | 4 | 5 | 0
Medication name and dose/route/frequency | 4 | 4 | 5 | 0
Code status | 2 | 2 | 7 | 0
Labs | 6 | 5 | 4 | 0
Access | 2 | 2 | 7 | 0
Ins/outs | 2 | 4 | 4 | 1
Primary language | 0 | 3 | 6 | 0
Vital signs | 3 | 4 | 4 | 1
Emergency contact | 0 | 2 | 7 | 0
Primary care provider | 4 | 4 | 5 | 0

NOTE: The last three columns give the current or anticipated data source at the nine sites. *Autoimported includes administrative data and free text entered into other electronic health record fields. †Manually entered directly into the printed handoff document. ‡Data field could not be included because of institutional limitations.

Nine institutions (100%) conducted structured group interviews, ranging in size from 4 to 27 individuals with a median of 5 participants. The documents containing information from each site were provided to the authors. The authors then tabulated categorical information, reviewed narrative comments to understand current institutional practices, and conducted simple content analysis to identify areas of concordance or discordance, particularly with respect to data elements and EHR usage. Based on the results of the printed handoff document review and structured group interviews, with additional perspectives provided by the I‐PASS EEC, the expert panel came to consensus on a list of 23 elements that should be included in printed handoff documents, including 15 essential data elements and 8 additional recommended elements (Table 2).

Table 2. Rating of Essential and Recommended Data Elements for a Printed Handoff Template*
  • NOTE: Abbreviations: I-PASS, illness severity, patient summary, action list, situation awareness and contingency planning, and synthesis by receiver. *Ratings use the American College of Cardiology Foundation/American Heart Association framework for practice guidelines: classification (I=effective; IIa=conflicting evidence/opinion, but weight is in favor of usefulness/efficacy; IIb=usefulness/efficacy less well established by evidence/opinion; III=not effective) and level of evidence supporting that classification (A=multiple large randomized controlled trials; B=single randomized trial or nonrandomized studies; C=expert consensus). For illness severity, the I-PASS categorization of stable/watcher/unstable is preferred, although other categorizations are acceptable. Labs refers to common or patient-specific labs.

Essential Elements
Patient identifiers
Patient name (class I, level of evidence C)
Medical record number (class I, level of evidence C)
Date of birth (class I, level of evidence C)
Hospital service identifiers
Attending name (class I, level of evidence C)
Team/service (class I, level of evidence C)
Room number (class I, level of evidence C)
Admission date (class I, level of evidence C)
Age (class I, level of evidence C)
Weight (class I, level of evidence C)
Illness severity (class I, level of evidence B)[20, 21]
Patient summary (class I, level of evidence B)[21, 22]
Action items (class I, level of evidence B) [21, 22]
Situation awareness/contingency planning (class I, level of evidence B) [21, 22]
Allergies (class I, level of evidence C)
Medications
Autopopulation of medications (class I, level of evidence B)[22, 23, 24]
Free‐text entry of medications (class IIa, level of evidence C)
Recommended Elements
Primary language (class IIa, level of evidence C)
Emergency contact (class IIa, level of evidence C)
Primary care provider (class IIa, level of evidence C)
Code status (class IIb, level of evidence C)
Labs (class IIa, level of evidence C)
Access (class IIa, level of evidence C)
Ins/outs (class IIa, level of evidence C)
Vital signs (class IIa, level of evidence C)

Evidence ratings[19] for these elements are included in Table 2. Several elements are classified as I-B (effective, nonrandomized studies) based on studies of the individual element or on more than 1 study of bundled elements from which a benefit could reasonably be extrapolated. These include illness severity,[20, 21] patient summary,[21, 22] action items (to-do lists),[21, 22] situation awareness and contingency planning,[21, 22] and medications[22, 23, 24] autoimported from the EHR. Medications entered as free text were classified as IIa-C because of the risk and potential significance of errors; in particular, there was concern that transcription errors, errors of omission, or errors of commission could lead to patient harm. The remaining essential elements are classified as I-C (effective, expert consensus). Of note, date of birth was specifically included as a patient identifier, distinct from age, which was felt to be useful as a descriptor (often within a one-liner or as part of the patient summary).

The 8 recommended elements were those for which there was not unanimous agreement on inclusion but for which the majority of the panel supported inclusion. These elements were classified as IIa-C, with 1 exception: code status, which generated significant controversy. After extensive discussion of safety, supervision, educational, and pediatric-specific considerations, all members of the group agreed to categorize it as a recommended element; it is classified as IIb-C.

All members of the group agreed that data elements should be directly imported from the EHR whenever possible. Finally, members agreed that the elements that make up the I-PASS mnemonic (illness severity, patient summary, action items, situation awareness/contingency planning) should be listed in that order whenever possible. A sample I-PASS-compliant printed handoff document is shown in Figure 1.

Figure 1. Sample screenshot of an I-PASS-compliant handoff report. Abbreviations: I-PASS, illness severity, patient summary, action list, situation awareness and contingency planning, and synthesis by receiver.
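For readers who want the consensus list in a structured form, the sketch below (Python) encodes the elements from Table 2 along with the recommended I-PASS ordering. The element names follow the table; the grouping, the ordering constant, and the completeness check are illustrative assumptions, not part of the study protocol.

```python
# Sketch: the panel's consensus element list (Table 2) encoded as a simple
# template definition. Element names follow the table; everything else
# (structure, ordering constant, completeness check) is illustrative.
ESSENTIAL_ELEMENTS = [
    "patient name", "medical record number", "date of birth",
    "attending name", "team/service", "room number", "admission date",
    "age", "weight",
    "illness severity", "patient summary", "action items",
    "situation awareness/contingency planning",
    "allergies", "medications",
]

RECOMMENDED_ELEMENTS = [
    "primary language", "emergency contact", "primary care provider",
    "code status", "labs", "access", "ins/outs", "vital signs",
]

# The I-PASS elements should appear in mnemonic order on the printed document.
I_PASS_ORDER = [
    "illness severity", "patient summary", "action items",
    "situation awareness/contingency planning",
]


def missing_essential_fields(entry: dict) -> list:
    """Return the essential fields that are absent or empty in a handoff entry."""
    return [element for element in ESSENTIAL_ELEMENTS if not entry.get(element)]
```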

DISCUSSION

We identified substantial variability in the structure and content of printed handoff documents used by 9 pediatric hospitalist teaching services, reflective of a lack of standardization. We found that institutional printed handoff documents shared some demographic elements (eg, name, room, medical record number) but also varied in clinical content (eg, vital signs, lab tests, code status). Our expert panel developed a list of 15 essential and 8 recommended data elements for printed handoff documents. Although this is a large number of fields, the majority of the essential fields were already included by most sites, and many are basic demographic identifiers. Illness severity is the 1 essential field that was not routinely included; however, including this type of overview is consistently recommended[2, 4] and supported by evidence,[20, 21] and contributes to building a shared mental model.[16] We recommend the categories of stable/watcher/unstable.[17]

Several prior single‐center studies have found that introducing a printed handoff document can lead to improvements in workflow, communication, and patient safety. In an early study, Petersen et al.[25] showed an association between use of a computerized sign‐out program and reduced odds of preventable adverse events during periods of cross‐coverage. Wayne et al.[26] reported fewer perceived inaccuracies in handoff documents as well as improved clarity at the time of transfer, supporting the role for standardization. Van Eaton et al.[27] demonstrated rapid uptake and desirability of a computerized handoff document, which combined autoimportation of information from an EHR with resident‐entered patient details, reflecting the importance of both data sources. In addition, they demonstrated improvements in both the rounding and sign‐out processes.[28]

Two studies specifically reported increased use of individual fields after implementation. Payne et al. implemented a Web-based handoff tool and documented significant increases in the number of handoffs containing problem lists, medication lists, and code status, accompanied by perceived improvements in handoff quality and fewer near-miss events.[24] Starmer et al. found that introduction of a resident handoff bundle that included a printed handoff tool led to reductions in medical errors and adverse events.[22] The study group using the tool populated 11 data elements more often after implementation, and introduction of this printed handoff tool in particular was associated with reductions in written handoff miscommunications. Neither study included a subanalysis to indicate which data elements may have been most important.

In contrast to previous single-institution studies, our recommendations for a printed handoff template come from evaluations of tools and discussions with front-line providers across 9 institutions. Our list overlaps substantially with the data elements recommended by Van Eaton et al.[27] However, several of our elements, including weight, ins/outs, primary language, emergency contact information, and primary care provider, do not appear in previously published templates. Other published handoff tools have been highly specialized (eg, for cardiac intensive care) or have included far fewer data elements than our group felt were essential. These differences may reflect the unique aspects of caring for pediatric patients (eg, the need for weights) and the absence of defined protocols for many pediatric conditions. In addition, the level of detail needed for contingency planning may vary between teaching and nonteaching services.

Resident physicians may provide valuable information in the development of standardized handoff documents. Clark et al.,[29] at Virginia Mason Medical Center, used resident-driven continuous quality improvement processes, including real-time feedback, to implement an electronic template. They found that engaging both senior leaders and front-line users was an important component of their success in uptake. Our study included residents as essential members of the structured group interviews to ensure that front-line users' needs were represented as the recommendations for a printed handoff template were developed.

As previously described,[17] our study group had identified several key elements that should be included in verbal handoffs: illness severity, a patient summary, a discrete action list, situation awareness/contingency planning, and a synthesis by the receiver. With consideration of the multivoting results as well as known best practices,[1, 4, 12] the expert panel for this study agreed that each of these elements should also be highlighted in the printed template, to ensure consistency between the printed document and the verbal handoff and to have each reinforce the other. On the printed handoff tool, the final S in the I-PASS mnemonic (synthesis by receiver) cannot be prepopulated, but considering the importance of this step,[16, 30, 31, 32] it should be printed as "synthesis by receiver" to serve as a text reminder to both givers and receivers.

The panel also felt, however, that the printed handoff document should provide additional background information not routinely included in a verbal handoff. It should serve as a reference tool both at the time of verbal handoff and throughout the day and night, and therefore should include more comprehensive information than is necessary or appropriate to convey during the verbal handoff. We identified 10 data elements that are essential in a printed handoff document in addition to the I‐PASS elements (Table 2).

Patient demographic data elements, as well as team assignments and attending physician, were uniformly supported for inclusion. The medication list was viewed as essential; however, the panel also recognized the potential for medical errors due to inaccuracies in the medication list. In particular, there was concern that including all fields of a medication order (drug, dose, route, frequency) would result in handoffs containing a high proportion of inaccurate information, particularly for complex patients whose medication regimens may vary over the course of hospitalization. Therefore, the panel agreed that if medication lists were entered manually, then only the medication name should be included as they did not wish to perpetuate inaccurate or potentially harmful information. If medication lists were autoimported from an EHR, then they should include drug name, dose, route, and frequency if possible.
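The panel's medication rule lends itself to a small sketch (Python): include the full order details only when they can be autoimported from the EHR, and fall back to the drug name alone when the list must be typed by hand. The function and field names here are assumptions for illustration, not part of any study tool.

```python
# Sketch of the medication-list rule described above: autoimported lists carry
# name, dose, route, and frequency; manually entered lists carry the drug name
# only, to avoid transcribing doses that may be wrong or out of date.
def format_medication_line(med: dict, autoimported: bool) -> str:
    if autoimported:
        return f"{med['name']} {med['dose']} {med['route']} {med['frequency']}"
    return med["name"]  # manual entry: name only


# Example usage (illustrative values)
print(format_medication_line(
    {"name": "amoxicillin", "dose": "250 mg", "route": "PO", "frequency": "q8h"},
    autoimported=True,
))
```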

In the I‐PASS study,[15] all institutions implemented printed handoff documents that included fields for the essential data elements. After implementation, there was a significant increase in completion of all essential fields. Although there is limited evidence to support any individual data element, increased usage of these elements was associated with the overall study finding of decreased rates of medical errors and preventable adverse events.

EHRs have the potential to help standardize printed handoff documents[5, 6, 33, 34, 35]; all participants in our study agreed that printed handoff documents should ideally be linked with the EHR and should autoimport data wherever appropriate. Manually populated (eg, word processor‐ or spreadsheet‐based) handoff tools have important limitations, particularly related to the potential for typographical errors as well as accidental omission of data fields, and lead to unnecessary duplication of work (eg, re‐entering data already included in a progress note) that can waste providers' time. It was also acknowledged that word processor‐ or spreadsheet‐based documents may have flexibility that is lacking in EHR‐based handoff documents. For example, formatting can more easily be adjusted to increase the number of patients per printed page. As technology advances, printed documents may be phased out in favor of EHR‐based on‐screen reports, which by their nature would be more accurate due to real‐time autoupdates.
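The preference for autoimporting, with manual entry as a fallback, can be expressed in one short helper. This is a sketch only; the function and its inputs are hypothetical.

```python
# Sketch: prefer a value supplied by the EHR; fall back to free text entered
# specifically for the printed handoff. Inputs are hypothetical dictionaries.
def populate_field(field_name: str, ehr_record: dict, manual_entry: dict) -> str:
    value = ehr_record.get(field_name)
    if value:                       # autoimport whenever the EHR can supply the field
        return str(value)
    return manual_entry.get(field_name, "")   # otherwise fall back to manual entry
```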

In making recommendations about essential versus recommended items for the printed handoff template, the only data element that generated controversy among our experts was code status: some felt it should be an essential element, whereas others did not. We believe this debate was particular to pediatric hospital ward settings, where codes are rare. One concern with including code status for all patients was that residents might assume patients were full code without verifying, and the resulting inaccuracy might have severe implications. Another was that residents might feel obligated to have code discussions with all patients regardless of severity of illness, which may be inappropriate in a pediatric population; several educators also expressed concern about trainees having unsupervised code-status conversations with families of pediatric patients. Conversely, others noted that omitting code status could be problematic during these rare but critically important events. Other fields, such as weight, might have less relevance for an adult population in which emergency drug doses are standardized.

Limitations

Our study has several limitations. We collected data only from hospitalist services at pediatric sites. Providers in other specialties would likely consider additional data elements essential (eg, postoperative day, code status). Our methodology was expert consensus based, driven by data collected from sites already participating in the I-PASS study. Although the I-PASS study demonstrated decreased rates of medical errors and preventable adverse events with inclusion of these data elements as part of a bundle, future research will be required to evaluate whether some of these items are more important than others in improving written communication and, ultimately, patient safety. Despite these limitations, our work represents an important starting point for developing standards for written handoff documents, particularly those generated from EHRs.

CONCLUSIONS

In this article we describe the results of a needs assessment that informed expert consensus-based recommendations for data elements to include in a printed handoff document. We recommend that pediatric programs include the identified elements as part of a standardized written handoff tool. Although many of these elements are also applicable to other specialties, future work should adapt the printed handoff document elements described here for use in other specialties and settings. Future studies should validate the importance of these elements by examining how their inclusion affects the quality of written handoffs and, ultimately, patient safety.

Acknowledgements

Members of the I-PASS Study Education Executive Committee who contributed to this manuscript include: Boston Children's Hospital/Harvard Medical School (primary site) (Christopher P. Landrigan, MD, MPH; Elizabeth L. Noble, BA; Theodore C. Sectish, MD; Lisa L. Tse, BA); Cincinnati Children's Hospital Medical Center/University of Cincinnati College of Medicine (Jennifer K. O'Toole, MD, MEd); Doernbecher Children's Hospital/Oregon Health and Science University (Amy J. Starmer, MD, MPH); Hospital for Sick Children/University of Toronto (Zia Bismilla, MD; Maitreya Coffey, MD); Lucile Packard Children's Hospital/Stanford University (Lauren A. Destino, MD; Jennifer L. Everhart, MD; Shilpa J. Patel, MD [currently at Kapi'olani Children's Hospital/University of Hawai'i School of Medicine]); National Capital Consortium (Jennifer H. Hepps, MD; Joseph O. Lopreiato, MD, MPH; Clifton E. Yu, MD); Primary Children's Medical Center/University of Utah (James F. Bale, Jr., MD; Adam T. Stevenson, MD); St. Louis Children's Hospital/Washington University (F. Sessions Cole, MD); St. Christopher's Hospital for Children/Drexel University College of Medicine (Sharon Calaman, MD; Nancy D. Spector, MD); and Benioff Children's Hospital/University of California San Francisco School of Medicine (Glenn Rosenbluth, MD; Daniel C. West, MD).

Additional I‐PASS Study Group members who contributed to this manuscript include April D. Allen, MPA, MA (Heller School for Social Policy and Management, Brandeis University, previously affiliated with Boston Children's Hospital), Madelyn D. Kahana, MD (The Children's Hospital at Montefiore/Albert Einstein College of Medicine, previously affiliated with Lucile Packard Children's Hospital/Stanford University), Robert S. McGregor, MD (Akron Children's Hospital/Northeast Ohio Medical University, previously affiliated with St. Christopher's Hospital for Children/Drexel University), and John S. Webster, MD, MBA, MS (Webster Healthcare Consulting Inc., formerly of the Department of Defense).

Members of the I‐PASS Study Group include individuals from the institutions listed below as follows: Boston Children's Hospital/Harvard Medical School (primary site): April D. Allen, MPA, MA (currently at Heller School for Social Policy and Management, Brandeis University), Angela M. Feraco, MD, Christopher P. Landrigan, MD, MPH, Elizabeth L. Noble, BA, Theodore C. Sectish, MD, Lisa L. Tse, BA. Brigham and Women's Hospital (data coordinating center): Anuj K. Dalal, MD, Carol A. Keohane, BSN, RN, Stuart Lipsitz, PhD, Jeffrey M. Rothschild, MD, MPH, Matt F. Wien, BS, Catherine S. Yoon, MS, Katherine R. Zigmont, BSN, RN. Cincinnati Children's Hospital Medical Center/University of Cincinnati College of Medicine: Javier Gonzalez del Rey, MD, MEd, Jennifer K. O'Toole, MD, MEd, Lauren G. Solan, MD. Doernbecher Children's Hospital/Oregon Health and Science University: Megan E. Aylor, MD, Amy J. Starmer, MD, MPH, Windy Stevenson, MD, Tamara Wagner, MD. Hospital for Sick Children/University of Toronto: Zia Bismilla, MD, Maitreya Coffey, MD, Sanjay Mahant, MD, MSc. Lucile Packard Children's Hospital/Stanford University: Rebecca L. Blankenburg, MD, MPH, Lauren A. Destino, MD, Jennifer L. Everhart, MD, Madelyn Kahana, MD, Shilpa J. Patel, MD (currently at Kapi'olani Children's Hospital/University of Hawaii School of Medicine). National Capital Consortium: Jennifer H. Hepps, MD, Joseph O. Lopreiato, MD, MPH, Clifton E. Yu, MD. Primary Children's Hospital/University of Utah: James F. Bale, Jr., MD, Jaime Blank Spackman, MSHS, CCRP, Rajendu Srivastava, MD, FRCP(C), MPH, Adam Stevenson, MD. St. Louis Children's Hospital/Washington University: Kevin Barton, MD, Kathleen Berchelmann, MD, F. Sessions Cole, MD, Christine Hrach, MD, Kyle S. Schultz, MD, Michael P. Turmelle, MD, Andrew J. White, MD. St. Christopher's Hospital for Children/Drexel University: Sharon Calaman, MD, Bronwyn D. Carlson, MD, Robert S. McGregor, MD (currently at Akron Children's Hospital/Northeast Ohio Medical University), Vahideh Nilforoshan, MD, Nancy D. Spector, MD. and Benioff Children's Hospital/University of California San Francisco School of Medicine: Glenn Rosenbluth, MD, Daniel C. West, MD. Dorene Balmer, PhD, RD, Carol L. Carraccio, MD, MA, Laura Degnon, CAE, and David McDonald, and Alan Schwartz PhD serve the I‐PASS Study Group as part of the IIPE. Karen M. Wilson, MD, MPH serves the I‐PASS Study Group as part of the advisory board from the PRIS Executive Council. John Webster served the I‐PASS Study Group and Education Executive Committee as a representative from TeamSTEPPS.

Disclosures: The I-PASS Study was primarily supported by the US Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation (1R18AE000029-01). The opinions and conclusions expressed herein are solely those of the author(s) and should not be construed as representing the opinions or policy of any agency of the federal government. The study was developed with input from the Initiative for Innovation in Pediatric Education and the Pediatric Research in Inpatient Settings Network (supported by the Children's Hospital Association, the Academic Pediatric Association, the American Academy of Pediatrics, and the Society of Hospital Medicine). A.J.S. was supported by the Agency for Healthcare Research and Quality/Oregon Comparative Effectiveness Research K12 Program (1K12HS019456-01). Additional funding for the I-PASS Study was provided by the Medical Research Foundation of Oregon, Physician Services Incorporated Foundation (Ontario, Canada), and Pfizer (unrestricted medical education grant to N.D.S.). C.P.L. and A.J.S. were supported by the Oregon Comparative Effectiveness Research K12 Program (1K12HS019456 from the Agency for Healthcare Research and Quality). A.J.S. was also supported by the Medical Research Foundation of Oregon. The authors report no conflicts of interest.

Handoffs among hospital providers are highly error prone and can result in serious morbidity and mortality. Best practices for verbal handoffs have been described[1, 2, 3, 4] and include conducting verbal handoffs face to face, providing opportunities for questions, having the receiver perform a readback, as well as specific content recommendations including action items. Far less research has focused on best practices for printed handoff documents,[5, 6] despite the routine use of written handoff tools as a reference by on‐call physicians.[7, 8] Erroneous or outdated information on the written handoff can mislead on‐call providers, potentially leading to serious medical errors.

In their most basic form, printed handoff documents list patients for whom a provider is responsible. Typically, they also contain demographic information, reason for hospital admission, and a task list for each patient. They may also contain more detailed information on patient history, hospital course, and/or care plan, and may vary among specialties.[9] They come in various forms, ranging from index cards with handwritten notes, to word‐processor or spreadsheet documents, to printed documents that are autopopulated from the electronic health record (EHR).[2] Importantly, printed handoff documents supplement the verbal handoff by allowing receivers to follow along as patients are presented. The concurrent use of written and verbal handoffs may improve retention of clinical information as compared with either alone.[10, 11]

The Joint Commission requires an institutional approach to patient handoffs.[12] The requirements state that handoff communication solutions should take a standardized form, but they do not provide details regarding what data elements should be included in printed or verbal handoffs. Accreditation Council for Graduate Medical Education Common Program Requirements likewise require that residents must become competent in patient handoffs[13] but do not provide specific details or measurement tools. Absent widely accepted guidelines, decisions regarding which elements to include in printed handoff documents are currently made at an individual or institutional level.

The I‐PASS study is a federally funded multi‐institutional project that demonstrated a decrease in medical errors and preventable adverse events after implementation of a standardized resident handoff bundle.[14, 15] The I‐PASS Study Group developed a bundle of handoff interventions, beginning with a handoff and teamwork training program (based in part on TeamSTEPPS [Team Strategies and Tools to Enhance Performance and Patient Safety]),[16] a novel verbal mnemonic, I‐PASS (Illness Severity, Patient Summary, Action List, Situation Awareness and Contingency Planning, and Synthesis by Receiver),[17] and changes to the verbal handoff process, in addition to several other elements.

We hypothesized that developing a standardized printed handoff template would reinforce the handoff training and enhance the value of the verbal handoff process changes. Given the paucity of data on best printed handoff practices, however, we first conducted a needs assessment to identify which data elements were currently contained in printed handoffs across sites, and to allow an expert panel to make recommendations for best practices.

METHODS

I‐PASS Study sites included 9 pediatric residency programs at academic medical centers from across North America. Programs were identified through professional networks and invited to participate. The nonintensive care unit hospitalist services at these medical centers are primarily staffed by residents and medical students with attending supervision. At 1 site, nurse practitioners also participate in care. Additional details about study sites can be found in the study descriptions previously published.[14, 15] All sites received local institutional review board approval.

We began by inviting members of the I‐PASS Education Executive Committee (EEC)[14] to build a collective, comprehensive list of possible data elements for printed handoff documents. This committee included pediatric residency program directors, pediatric hospitalists, education researchers, health services researchers, and patient safety experts. We obtained sample handoff documents from pediatric hospitalist services at each of 9 institutions in the United States and Canada (with protected health information redacted). We reviewed these sample handoff documents to characterize their format and to determine what discrete data elements appeared in each site's printed handoff document. Presence or absence of each data element across sites was tabulated. We also queried sites to determine the feasibility of including elements that were not presently included.

Subsequently, I‐PASS site investigators led structured group interviews at participating sites to gather additional information about handoff practices at each site. These structured group interviews included diverse representation from residents, faculty, and residency program leadership, as well as hospitalists and medical students, to ensure the comprehensive acquisition of information regarding site‐specific characteristics. Each group provided answers to a standardized set of open‐ended questions that addressed current practices, handoff education, simulation use, team structure, and the nature of current written handoff tools, if applicable, at each site. One member of the structured group interview served as a scribe and created a document that summarized the content of the structured group interview meeting and answers to the standardized questions.

Consensus on Content

The initial data collection also included a multivote process[18] of the full I‐PASS EEC to help prioritize data elements. Committee members brainstormed a list of all possible data elements for a printed handoff document. Each member (n=14) was given 10 votes to distribute among the elements. Committee members could assign more than 1 vote to an element to emphasize its importance.

The results of this process as well as the current data elements included in each printed handoff tool were reviewed by a subgroup of the I‐PASS EEC. These expert panel members participated in a series of conference calls during which they tabulated categorical information, reviewed narrative comments, discussed existing evidence, and conducted simple content analysis to identify areas of concordance or discordance. Areas of discordance were discussed by the committee. Disagreements were resolved with group consensus with attention to published evidence or best practices, if available.

Elements were divided into those that were essential (unanimous consensus, no conflicting literature) and those that were recommended (majority supported inclusion of element, no conflicting literature). Ratings were assigned using the American College of Cardiology/American Heart Association framework for practice guidelines,[19] in which each element is assigned a classification (I=effective, II=conflicting evidence/opinion, III=not effective) and a level of evidence to support that classification (A=multiple large randomized controlled trials, B=single randomized trial, or nonrandomized studies, C=expert consensus).

The expert panel reached consensus, through active discussion, on a list of data elements that should be included in an ideal printed handoff document. Elements were chosen based on perceived importance, with attention to published best practices[1, 16] and the multivoting results. In making recommendations, consideration was given to whether data elements could be electronically imported into the printed handoff document from the EHR, or whether they would be entered manually. The potential for serious medical errors due to possible errors in manual entry of data was an important aspect of recommendations made. The list of candidate elements was then reviewed by a larger group of investigators from the I‐PASS Education Executive Committee and Coordinating Council for additional input.

The panel asked site investigators from each participating hospital to gather data on the feasibility of redesigning the printed handoff at that hospital to include each recommended element. Site investigators reported whether each element was already included, possible to include but not included currently, or not currently possible to include within that site's printed handoff tool. Site investigators also reported how data elements were populated in their handoff documents, with options including: (1) autopopulated from administrative data (eg, pharmacy‐entered medication list, demographic data entered by admitting office), (2) autoimported from physicians' free‐text entries elsewhere in the EHR (eg, progress notes), (3) free text entered specifically for the printed handoff, or (4) not applicable (element cannot be included).

RESULTS

Nine programs (100%) provided data on the structure and contents of their printed handoff documents. We found wide variation in structure across the 9 sites. Three sites used a word‐processorbased document that required manual entry of all data elements. The other 6 institutions had a direct link with the EHR to enable autopopulation of between 10 and 20 elements on the printed handoff document.

The content of written handoff documents, as well as the sources of data included in them (present or future), likewise varied substantially across sites (Table 1). Only 4 data elements (name, age, weight, and a list of medications) were universally included at all 9 sites. Among the 6 institutions that linked the printed handoff to the EHR, there was also substantial variation in which elements were autoimported. Only 7 elements were universally autoimported at these 6 sites: patient name, medical record number, room number, weight, date of birth, age, and date of admission. Two elements from the original brainstorming were not presently included in any sites' documents (emergency contact and primary language).

Results of Initial Needs Assessment, With Current and Potential Future Inclusion of Data Elements in Printed Handoff Documents at Nine Study Sites
Data ElementsSites With Data Element Included at Initial Needs Assessment (Out of Nine Sites)Data Source (Current or Anticipated)
Autoimported*Manually EnteredNot Applicable
  • NOTE: *Includes administrative data and free text entered into other electronic health record fields. Manually entered directly into printed handoff document. Data field could not be included due to institutional limitations.

Name9630
Medical record number8630
Room number8630
Allergies6450
Weight9630
Age9630
Date of birth6630
Admission date8630
Attending name5450
Team/service7450
Illness severity1090
Patient summary8090
Action items8090
Situation monitoring/contingency plan5090
Medication name9450
Medication name and dose/route/frequency4450
Code status2270
Labs6540
Access2270
Ins/outs2441
Primary language0360
Vital signs3441
Emergency contact0270
Primary care provider4450

Nine institutions (100%) conducted structured group interviews, ranging in size from 4 to 27 individuals with a median of 5 participants. The documents containing information from each site were provided to the authors. The authors then tabulated categorical information, reviewed narrative comments to understand current institutional practices, and conducted simple content analysis to identify areas of concordance or discordance, particularly with respect to data elements and EHR usage. Based on the results of the printed handoff document review and structured group interviews, with additional perspectives provided by the I‐PASS EEC, the expert panel came to consensus on a list of 23 elements that should be included in printed handoff documents, including 15 essential data elements and 8 additional recommended elements (Table 2).

Table 2. Rating of Essential and Recommended Data Elements for Printed Handoff Template*

Essential Elements
Patient identifiers
  Patient name (class I, level of evidence C)
  Medical record number (class I, level of evidence C)
  Date of birth (class I, level of evidence C)
Hospital service identifiers
  Attending name (class I, level of evidence C)
  Team/service (class I, level of evidence C)
  Room number (class I, level of evidence C)
Admission date (class I, level of evidence C)
Age (class I, level of evidence C)
Weight (class I, level of evidence C)
Illness severity† (class I, level of evidence B)[20, 21]
Patient summary (class I, level of evidence B)[21, 22]
Action items (class I, level of evidence B)[21, 22]
Situation awareness/contingency planning (class I, level of evidence B)[21, 22]
Allergies (class I, level of evidence C)
Medications
  Autopopulation of medications (class I, level of evidence B)[22, 23, 24]
  Free-text entry of medications (class IIa, level of evidence C)

Recommended Elements
Primary language (class IIa, level of evidence C)
Emergency contact (class IIa, level of evidence C)
Primary care provider (class IIa, level of evidence C)
Code status (class IIb, level of evidence C)
Labs‡ (class IIa, level of evidence C)
Access (class IIa, level of evidence C)
Ins/outs (class IIa, level of evidence C)
Vital signs (class IIa, level of evidence C)

NOTE: Abbreviations: I-PASS, illness severity, patient summary, action list, situation awareness and contingency planning, and synthesis by receiver. *Utilizing the American College of Cardiology Foundation and American Heart Association framework for practice guidelines: classification (I = effective; IIa = conflicting evidence/opinion, but weight is in favor of usefulness/efficacy; IIb = usefulness/efficacy less well established by evidence/opinion; III = not effective) and level of evidence supporting the classification (A = multiple large randomized controlled trials; B = single randomized trial or nonrandomized studies; C = expert consensus). †Preferably using the I-PASS categorization of stable/watcher/unstable, although other categorizations are acceptable. ‡Refers to common or patient-specific labs.

Evidence ratings[19] for these elements are included in Table 2. Several elements are classified as I-B (effective, nonrandomized studies) based either on studies of the individual element or on more than 1 study of bundled elements from which findings could reasonably be extrapolated. These include illness severity,[20, 21] patient summary,[21, 22] action items[21, 22] (to-do lists), situation awareness and contingency plan,[21, 22] and medications,[22, 23, 24] with attention to importing the latter from the EHR. Medications entered as free text were classified as IIa-C because of the risk and potential significance of errors; in particular, there was concern that transcription errors, errors of omission, or errors of commission could lead to patient harm. The remaining essential elements are classified as I-C (effective, expert consensus). Of note, date of birth was specifically included as a patient identifier, distinct from age, which was felt to be useful as a descriptor (often within a one-liner or as part of the patient summary).

The 8 recommended elements were those for which there was not unanimous agreement on inclusion but for which a majority of the panel favored inclusion. These elements were classified as IIa-C, with 1 exception. Code status generated significant controversy; after extensive discussion weighing safety, supervision, educational, and pediatric-specific considerations, all members of the group agreed to categorize it as a recommended element, and it is classified as IIb-C.

All members of the group agreed that data elements should be directly imported from the EHR whenever possible. Finally, members agreed that the elements that make up the I-PASS mnemonic (illness severity, patient summary, action items, situation awareness/contingency planning) should be listed in that order whenever possible. A sample I-PASS-compliant printed handoff document is shown in Figure 1.

Figure 1
Sample screenshot of an I‐PASS–compliant handoff report. Abbreviations: I‐PASS, illness severity, patient summary, action list, situation awareness and contingency planning, and synthesis by receiver.
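To make the consensus template concrete, the sketch below (in Python) encodes the Table 2 elements in a small data model, keeps the I-PASS components in their mnemonic order, and adds a simple completeness check of the kind a handoff tool could run before printing. The field names, the stable/watcher/unstable enumeration, and the check itself are illustrative choices for this sketch, not artifacts of the I-PASS study or of any site's tool.

from dataclasses import dataclass
from enum import Enum

class IllnessSeverity(Enum):
    """Recommended categorization for the illness severity field."""
    STABLE = "stable"
    WATCHER = "watcher"
    UNSTABLE = "unstable"

@dataclass(frozen=True)
class TemplateField:
    name: str
    essential: bool  # True = essential element; False = recommended element

# Fields are listed so that the I-PASS components (illness severity, patient
# summary, action items, situation awareness/contingency planning) keep their
# mnemonic order; identifiers and demographics precede them.
TEMPLATE_FIELDS = [
    TemplateField("patient_name", True),
    TemplateField("medical_record_number", True),
    TemplateField("date_of_birth", True),
    TemplateField("attending_name", True),
    TemplateField("team_service", True),
    TemplateField("room_number", True),
    TemplateField("admission_date", True),
    TemplateField("age", True),
    TemplateField("weight", True),
    TemplateField("illness_severity", True),   # categorize with IllnessSeverity
    TemplateField("patient_summary", True),
    TemplateField("action_items", True),
    TemplateField("contingency_plan", True),
    TemplateField("allergies", True),
    TemplateField("medications", True),
    TemplateField("primary_language", False),
    TemplateField("emergency_contact", False),
    TemplateField("primary_care_provider", False),
    TemplateField("code_status", False),
    TemplateField("labs", False),
    TemplateField("access", False),
    TemplateField("ins_outs", False),
    TemplateField("vital_signs", False),
]

def missing_essential_fields(record: dict) -> list:
    """Return the names of essential fields left blank in a draft handoff record."""
    return [f.name for f in TEMPLATE_FIELDS if f.essential and not record.get(f.name)]

A schema defined this way also makes it straightforward to audit how often essential fields are actually completed, the kind of measure reported after implementation in the I-PASS study.[15]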

DISCUSSION

We identified substantial variability in the structure and content of printed handoff documents used by 9 pediatric hospitalist teaching services, reflective of a lack of standardization. We found that institutional printed handoff documents shared some demographic elements (eg, name, room, medical record number) but also varied in clinical content (eg, vital signs, lab tests, code status). Our expert panel developed a list of 15 essential and 8 recommended data elements for printed handoff documents. Although this is a large number of fields, the majority of the essential fields were already included by most sites, and many are basic demographic identifiers. Illness severity is the 1 essential field that was not routinely included; however, including this type of overview is consistently recommended[2, 4] and supported by evidence,[20, 21] and contributes to building a shared mental model.[16] We recommend the categories of stable/watcher/unstable.[17]

Several prior single‐center studies have found that introducing a printed handoff document can lead to improvements in workflow, communication, and patient safety. In an early study, Petersen et al.[25] showed an association between use of a computerized sign‐out program and reduced odds of preventable adverse events during periods of cross‐coverage. Wayne et al.[26] reported fewer perceived inaccuracies in handoff documents as well as improved clarity at the time of transfer, supporting the role for standardization. Van Eaton et al.[27] demonstrated rapid uptake and desirability of a computerized handoff document, which combined autoimportation of information from an EHR with resident‐entered patient details, reflecting the importance of both data sources. In addition, they demonstrated improvements in both the rounding and sign‐out processes.[28]

Two studies specifically reported increased use of specific fields after implementation. Payne et al. implemented a Web-based handoff tool and documented significant increases in the number of handoffs containing problem lists, medication lists, and code status, accompanied by perceived improvements in handoff quality and fewer near-miss events.[24] Starmer et al. found that introduction of a resident handoff bundle that included a printed handoff tool led to reductions in medical errors and adverse events.[22] The study group using the tool populated 11 data elements more often after implementation, and introduction of this printed handoff tool in particular was associated with reductions in written handoff miscommunications. Neither study included a subanalysis to indicate which data elements may have been most important.

In contrast to previous single-institution studies, our recommendations for a printed handoff template come from evaluations of tools and discussions with front-line providers across 9 institutions. Our data elements overlapped substantially with those recommended by Van Eaton et al.[27] However, several elements, including weight, ins/outs, primary language, emergency contact information, and primary care provider, do not appear in published templates. Other published handoff tools have been highly specialized (eg, for cardiac intensive care) or have included far fewer data elements than our group felt were essential. These differences may reflect the unique aspects of caring for pediatric patients (eg, the need for weights) and the absence of defined protocols for many pediatric conditions. In addition, the level of detail needed for contingency planning may vary between teaching and nonteaching services.

Resident physicians may provide valuable information in the development of standardized handoff documents. Clark et al.,[29] at Virginia Mason Medical Center, used resident-driven continuous quality improvement processes, including real-time feedback, to implement an electronic template; they found that engagement of both senior leaders and front-line users was an important component of their success in uptake. Our study included residents as essential members of the structured group interviews to ensure that front-line users' needs were represented as the recommendations for a printed handoff template were developed.

As previously described,[17] our study group had identified several key elements that should be included in verbal handoffs: illness severity, a patient summary, a discrete action list, situation awareness/contingency planning, and a synthesis by the receiver. With consideration of the multivoting results as well as known best practices,[1, 4, 12] the expert panel for this study agreed that each of these elements should also be highlighted in the printed template, to ensure consistency between the printed document and the verbal handoff and to have each reinforce the other. On the printed handoff tool, the final S of the I-PASS mnemonic (synthesis by receiver) cannot be prepopulated, but considering the importance of this step,[16, 30, 31, 32] the phrase "synthesis by receiver" should still be printed to serve as a text reminder to both givers and receivers.

The panel also felt, however, that the printed handoff document should provide additional background information not routinely included in a verbal handoff. It should serve as a reference tool both at the time of verbal handoff and throughout the day and night, and therefore should include more comprehensive information than is necessary or appropriate to convey during the verbal handoff. We identified 10 data elements that are essential in a printed handoff document in addition to the I‐PASS elements (Table 2).

Patient demographic data elements, as well as team assignments and the attending physician, were uniformly supported for inclusion. The medication list was viewed as essential; however, the panel also recognized the potential for medical errors due to inaccuracies in the medication list. In particular, there was concern that including all fields of a medication order (drug, dose, route, frequency) would result in handoffs containing a high proportion of inaccurate information, particularly for complex patients whose medication regimens may vary over the course of hospitalization. Therefore, the panel agreed that if medication lists were entered manually, only the medication name should be included, so as not to perpetuate inaccurate or potentially harmful information. If medication lists were autoimported from an EHR, they should include drug name, dose, route, and frequency whenever possible.
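As a minimal sketch of this medication rule, assuming a simple dictionary representation of a medication order (the field names are hypothetical), the logic might look like the following:

def format_medication(med: dict, autoimported: bool) -> str:
    """Render one medication line for the printed handoff.

    Dose, route, and frequency are shown only when the entry is autoimported
    from the EHR; a manually entered medication is reduced to its name so that
    hand-transcribed details are not presented as authoritative.
    """
    if autoimported:
        parts = [med.get("name"), med.get("dose"), med.get("route"), med.get("frequency")]
        return " ".join(p for p in parts if p)
    return med["name"]

# An autoimported order prints in full; a manually entered one prints name only.
print(format_medication(
    {"name": "amoxicillin", "dose": "250 mg", "route": "PO", "frequency": "q8h"},
    autoimported=True))                                    # amoxicillin 250 mg PO q8h
print(format_medication({"name": "amoxicillin"}, autoimported=False))  # amoxicillin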

In the I‐PASS study,[15] all institutions implemented printed handoff documents that included fields for the essential data elements. After implementation, there was a significant increase in completion of all essential fields. Although there is limited evidence to support any individual data element, increased usage of these elements was associated with the overall study finding of decreased rates of medical errors and preventable adverse events.

EHRs have the potential to help standardize printed handoff documents[5, 6, 33, 34, 35]; all participants in our study agreed that printed handoff documents should ideally be linked with the EHR and should autoimport data wherever appropriate. Manually populated (eg, word processor- or spreadsheet-based) handoff tools have important limitations, particularly the potential for typographical errors and accidental omission of data fields, and they lead to unnecessary duplication of work (eg, re-entering data already included in a progress note) that can waste providers' time. It was also acknowledged that word processor- or spreadsheet-based documents may offer flexibility that is lacking in EHR-based handoff documents; for example, formatting can more easily be adjusted to increase the number of patients per printed page. As technology advances, printed documents may be phased out in favor of EHR-based on-screen reports, which by their nature would be more accurate because of real-time autoupdates.
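To illustrate the autoimport recommendation, the following sketch assembles a handoff record by copying structured fields from a hypothetical EHR export and accepting only the narrative I-PASS content from the clinician. The field names and merge logic are assumptions made for illustration and do not describe any particular vendor's interface.

AUTOIMPORTED_FIELDS = [
    "patient_name", "medical_record_number", "date_of_birth", "age", "weight",
    "room_number", "admission_date", "allergies", "medications",
]
CLINICIAN_FIELDS = [
    "illness_severity", "patient_summary", "action_items", "contingency_plan",
]

def build_handoff_record(ehr_export: dict, clinician_entries: dict) -> dict:
    """Merge autoimported EHR data with clinician-entered I-PASS content.

    Pulling demographics, allergies, and medications from the EHR avoids
    retyping data documented elsewhere (and the transcription errors that come
    with retyping); the narrative I-PASS fields cannot be prepopulated and are
    taken from the clinician.
    """
    record = {field: ehr_export.get(field, "") for field in AUTOIMPORTED_FIELDS}
    record.update({field: clinician_entries.get(field, "") for field in CLINICIAN_FIELDS})
    return record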

In making recommendations about essential versus recommended items for the printed handoff template, the only data element that generated controversy among our experts was code status. Some felt that it should be included as an essential element, whereas others did not. We believe this debate reflects our practice in pediatric hospital ward settings, where codes are rare. Among the concerns expressed about including code status for all patients was that residents might assume patients were full code without verifying; the resulting inaccuracies could have severe implications. Alternatively, residents might feel obligated to have code discussions with all patients regardless of severity of illness, which may be inappropriate in a pediatric population. Several educators also expressed concern about trainees having unsupervised code-status conversations with families of pediatric patients. Conversely, although codes are rare in pediatric ward settings, concerns were raised that omitting code status could be problematic during these rare but critically important events. Other fields, such as weight, might have less relevance for an adult population in which emergency drug doses are standardized.

Limitations

Our study has several limitations. We collected data only from hospitalist services at pediatric sites; providers in other specialties would likely consider additional data elements essential (eg, postoperative day, code status). Our methodology was expert consensus based, driven by data collection from sites that were already participating in the I-PASS study. Although the I-PASS study demonstrated decreased rates of medical errors and preventable adverse events with inclusion of these data elements as part of a bundle, future research will be required to evaluate whether some of these items are more important than others in improving written communication and, ultimately, patient safety. In spite of these limitations, our work represents an important starting point for the development of standards for written handoff documents, particularly those generated from EHRs.

CONCLUSIONS

In this article we describe the results of a needs assessment that informed expert consensus-based recommendations for data elements to include in a printed handoff document. We recommend that pediatric programs include the elements identified here as part of a standardized written handoff tool. Although many of these elements are also applicable to other specialties, future work should adapt the printed handoff document elements described here for use in other specialties and settings. Future studies should also validate the importance of these elements, examining how their inclusion affects the quality of written handoffs and, ultimately, patient safety.

Acknowledgements

Members of the I‐PASS Study Education Executive Committee who contributed to this manuscript include: Boston Children's Hospital/Harvard Medical School (primary site) (Christopher P. Landrigan, MD, MPH, Elizabeth L. Noble, BA. Theodore C. Sectish, MD. Lisa L. Tse, BA). Cincinnati Children's Hospital Medical Center/University of Cincinnati College of Medicine (Jennifer K. O'Toole, MD, MEd). Doernbecher Children's Hospital/Oregon Health and Science University (Amy J. Starmer, MD, MPH). Hospital for Sick Children/University of Toronto (Zia Bismilla, MD. Maitreya Coffey, MD). Lucile Packard Children's Hospital/Stanford University (Lauren A. Destino, MD. Jennifer L. Everhart, MD. Shilpa J. Patel, MD [currently at Kapi'olani Children's Hospital/University of Hawai'i School of Medicine]). National Capital Consortium (Jennifer H. Hepps, MD. Joseph O. Lopreiato, MD, MPH. Clifton E. Yu, MD). Primary Children's Medical Center/University of Utah (James F. Bale, Jr., MD. Adam T. Stevenson, MD). St. Louis Children's Hospital/Washington University (F. Sessions Cole, MD). St. Christopher's Hospital for Children/Drexel University College of Medicine (Sharon Calaman, MD. Nancy D. Spector, MD). Benioff Children's Hospital/University of California San Francisco School of Medicine (Glenn Rosenbluth, MD. Daniel C. West, MD).

Additional I‐PASS Study Group members who contributed to this manuscript include April D. Allen, MPA, MA (Heller School for Social Policy and Management, Brandeis University, previously affiliated with Boston Children's Hospital), Madelyn D. Kahana, MD (The Children's Hospital at Montefiore/Albert Einstein College of Medicine, previously affiliated with Lucile Packard Children's Hospital/Stanford University), Robert S. McGregor, MD (Akron Children's Hospital/Northeast Ohio Medical University, previously affiliated with St. Christopher's Hospital for Children/Drexel University), and John S. Webster, MD, MBA, MS (Webster Healthcare Consulting Inc., formerly of the Department of Defense).

Members of the I‐PASS Study Group include individuals from the institutions listed below as follows: Boston Children's Hospital/Harvard Medical School (primary site): April D. Allen, MPA, MA (currently at Heller School for Social Policy and Management, Brandeis University), Angela M. Feraco, MD, Christopher P. Landrigan, MD, MPH, Elizabeth L. Noble, BA, Theodore C. Sectish, MD, Lisa L. Tse, BA. Brigham and Women's Hospital (data coordinating center): Anuj K. Dalal, MD, Carol A. Keohane, BSN, RN, Stuart Lipsitz, PhD, Jeffrey M. Rothschild, MD, MPH, Matt F. Wien, BS, Catherine S. Yoon, MS, Katherine R. Zigmont, BSN, RN. Cincinnati Children's Hospital Medical Center/University of Cincinnati College of Medicine: Javier Gonzalez del Rey, MD, MEd, Jennifer K. O'Toole, MD, MEd, Lauren G. Solan, MD. Doernbecher Children's Hospital/Oregon Health and Science University: Megan E. Aylor, MD, Amy J. Starmer, MD, MPH, Windy Stevenson, MD, Tamara Wagner, MD. Hospital for Sick Children/University of Toronto: Zia Bismilla, MD, Maitreya Coffey, MD, Sanjay Mahant, MD, MSc. Lucile Packard Children's Hospital/Stanford University: Rebecca L. Blankenburg, MD, MPH, Lauren A. Destino, MD, Jennifer L. Everhart, MD, Madelyn Kahana, MD, Shilpa J. Patel, MD (currently at Kapi'olani Children's Hospital/University of Hawaii School of Medicine). National Capital Consortium: Jennifer H. Hepps, MD, Joseph O. Lopreiato, MD, MPH, Clifton E. Yu, MD. Primary Children's Hospital/University of Utah: James F. Bale, Jr., MD, Jaime Blank Spackman, MSHS, CCRP, Rajendu Srivastava, MD, FRCP(C), MPH, Adam Stevenson, MD. St. Louis Children's Hospital/Washington University: Kevin Barton, MD, Kathleen Berchelmann, MD, F. Sessions Cole, MD, Christine Hrach, MD, Kyle S. Schultz, MD, Michael P. Turmelle, MD, Andrew J. White, MD. St. Christopher's Hospital for Children/Drexel University: Sharon Calaman, MD, Bronwyn D. Carlson, MD, Robert S. McGregor, MD (currently at Akron Children's Hospital/Northeast Ohio Medical University), Vahideh Nilforoshan, MD, Nancy D. Spector, MD. and Benioff Children's Hospital/University of California San Francisco School of Medicine: Glenn Rosenbluth, MD, Daniel C. West, MD. Dorene Balmer, PhD, RD, Carol L. Carraccio, MD, MA, Laura Degnon, CAE, and David McDonald, and Alan Schwartz PhD serve the I‐PASS Study Group as part of the IIPE. Karen M. Wilson, MD, MPH serves the I‐PASS Study Group as part of the advisory board from the PRIS Executive Council. John Webster served the I‐PASS Study Group and Education Executive Committee as a representative from TeamSTEPPS.

Disclosures: The I-PASS Study was primarily supported by the US Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation (1R18AE000029-01). The opinions and conclusions expressed herein are solely those of the author(s) and should not be construed as representing the opinions or policy of any agency of the federal government. Developed with input from the Initiative for Innovation in Pediatric Education and the Pediatric Research in Inpatient Settings Network (supported by the Children's Hospital Association, the Academic Pediatric Association, the American Academy of Pediatrics, and the Society of Hospital Medicine). A.J.S. was supported by the Agency for Healthcare Research and Quality/Oregon Comparative Effectiveness Research K12 Program (1K12HS019456-01). Additional funding for the I-PASS Study was provided by the Medical Research Foundation of Oregon, Physician Services Incorporated Foundation (Ontario, Canada), and Pfizer (unrestricted medical education grant to N.D.S.). C.P.L. and A.J.S. were supported by the Oregon Comparative Effectiveness Research K12 Program (1K12HS019456 from the Agency for Healthcare Research and Quality). A.J.S. was also supported by the Medical Research Foundation of Oregon. The authors report no conflicts of interest.

References
  1. Patterson ES, Roth EM, Woods DD, Chow R, Gomes JO. Handoff strategies in settings with high consequences for failure: lessons for health care operations. Int J Qual Health Care. 2004;16(2):125–132.
  2. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign-out. J Hosp Med. 2006;1(4):257–266.
  3. Horwitz LI, Moin T, Green ML. Development and implementation of an oral sign-out skills curriculum. J Gen Intern Med. 2007;22(10):1470–1474.
  4. Arora VM, Manjarrez E, Dressler DD, Basaviah P, Halasyamani L, Kripalani S. Hospitalist handoffs: a systematic review and task force recommendations. J Hosp Med. 2009;4(7):433–440.
  5. Abraham J, Kannampallil T, Patel VL. A systematic review of the literature on the evaluation of handoff tools: implications for research and practice. J Am Med Inform Assoc. 2014;21(1):154–162.
  6. Li P, Ali S, Tang C, Ghali WA, Stelfox HT. Review of computerized physician handoff tools for improving the quality of patient care. J Hosp Med. 2013;8(8):456–463.
  7. McSweeney ME, Landrigan CP, Jiang H, Starmer A, Lightdale JR. Answering questions on call: pediatric resident physicians' use of handoffs and other resources. J Hosp Med. 2013;8(6):328–333.
  8. Fogerty RL, Schoenfeld A, Salim Al-Damluji M, Horwitz LI. Effectiveness of written hospitalist sign-outs in answering overnight inquiries. J Hosp Med. 2013;8(11):609–614.
  9. Schoenfeld AR, Salim Al-Damluji M, Horwitz LI. Sign-out snapshot: cross-sectional evaluation of written sign-outs among specialties. BMJ Qual Saf. 2014;23(1):66–72.
  10. Bhabra G, Mackeith S, Monteiro P, Pothier DD. An experimental comparison of handover methods. Ann R Coll Surg Engl. 2007;89(3):298–300.
  11. Pothier D, Monteiro P, Mooktiar M, Shaw A. Pilot study to show the loss of important data in nursing handover. Br J Nurs. 2005;14(20):1090–1093.
  12. The Joint Commission. Hospital Accreditation Standards 2015: Joint Commission Resources; 2015:PC.02.02.01.
  13. Accreditation Council for Graduate Medical Education. Common Program Requirements. 2013; http://acgme.org/acgmeweb/tabid/429/ProgramandInstitutionalAccreditation/CommonProgramRequirements.aspx. Accessed May 11, 2015.
  14. Sectish TC, Starmer AJ, Landrigan CP, Spector ND. Establishing a multisite education and research project requires leadership, expertise, collaboration, and an important aim. Pediatrics. 2010;126(4):619–622.
  15. Starmer AJ, Spector ND, Srivastava R, et al. Changes in medical errors after implementation of a handoff program. N Engl J Med. 2014;371(19):1803–1812.
  16. US Department of Health and Human Services. Agency for Healthcare Research and Quality. TeamSTEPPS website. Available at: http://teamstepps.ahrq.gov/. Accessed July 12, 2013.
  17. Starmer AJ, Spector ND, Srivastava R, Allen AD, Landrigan CP, Sectish TC. I-PASS, a mnemonic to standardize verbal handoffs. Pediatrics. 2012;129(2):201–204.
  18. Scholtes P, Joiner B, Streibel B. The Team Handbook. 3rd ed. Middleton, WI: Oriel STAT A MATRIX; 2010.
  19. ACC/AHA Task Force on Practice Guidelines. Methodology Manual and Policies From the ACCF/AHA Task Force on Practice Guidelines. Available at: http://my.americanheart.org/idc/groups/ahamah-public/@wcm/@sop/documents/downloadable/ucm_319826.pdf. Published June 2010. Accessed January 11, 2015.
  20. Naessens JM, Campbell CR, Shah N, et al. Effect of illness severity and comorbidity on patient safety and adverse events. Am J Med Qual. 2012;27(1):48–57.
  21. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. Consequences of inadequate sign-out for patient care. Arch Intern Med. 2008;168(16):1755–1760.
  22. Starmer AJ, Sectish TC, Simon DW, et al. Rates of medical errors and preventable adverse events among hospitalized children following implementation of a resident handoff bundle. JAMA. 2013;310(21):2262–2270.
  23. Arora V, Kao J, Lovinger D, Seiden SC, Meltzer D. Medication discrepancies in resident sign-outs and their potential to harm. J Gen Intern Med. 2007;22(12):1751–1755.
  24. Payne CE, Stein JM, Leong T, Dressler DD. Avoiding handover fumbles: a controlled trial of a structured handover tool versus traditional handover methods. BMJ Qual Saf. 2012;21(11):925–932.
  25. Petersen LA, Orav EJ, Teich JM, O'Neil AC, Brennan TA. Using a computerized sign-out program to improve continuity of inpatient care and prevent adverse events. Jt Comm J Qual Improv. 1998;24(2):77–87.
  26. Wayne JD, Tyagi R, Reinhardt G, et al. Simple standardized patient handoff system that increases accuracy and completeness. J Surg Educ. 2008;65(6):476–485.
  27. Eaton EG, Horvath KD, Lober WB, Pellegrini CA. Organizing the transfer of patient care information: the development of a computerized resident sign-out system. Surgery. 2004;136(1):5–13.
  28. Eaton EG, Horvath KD, Lober WB, Rossini AJ, Pellegrini CA. A randomized, controlled trial evaluating the impact of a computerized rounding and sign-out system on continuity of care and resident work hours. J Am Coll Surg. 2005;200(4):538–545.
  29. Clark CJ, Sindell SL, Koehler RP. Template for success: using a resident-designed sign-out template in the handover of patient care. J Surg Educ. 2011;68(1):52–57.
  30. Boyd M, Cumin D, Lombard B, Torrie J, Civil N, Weller J. Read-back improves information transfer in simulated clinical crises. BMJ Qual Saf. 2014;23(12):989–993.
  31. Chang VY, Arora VM, Lev-Ari S, D'Arcy M, Keysar B. Interns overestimate the effectiveness of their hand-off communication. Pediatrics. 2010;125(3):491–496.
  32. Barenfanger J, Sautter RL, Lang DL, Collins SM, Hacek DM, Peterson LR. Improving patient safety by repeating (read-back) telephone reports of critical information. Am J Clin Pathol. 2004;121(6):801–803.
  33. Collins SA, Stein DM, Vawdrey DK, Stetson PD, Bakken S. Content overlap in nurse and physician handoff artifacts and the potential role of electronic health records: a systematic review. J Biomed Inform. 2011;44(4):704–712.
  34. Laxmisan A, McCoy AB, Wright A, Sittig DF. Clinical summarization capabilities of commercially-available and internally-developed electronic health records. Appl Clin Inform. 2012;3(1):80–93.
  35. Hunt S, Staggers N. An analysis and recommendations for multidisciplinary computerized handoff applications in hospitals. AMIA Annu Symp Proc. 2011;2011:588–597.
Issue
Journal of Hospital Medicine - 10(8)
Page Number
517-524
Display Headline
Variation in printed handoff documents: Results and recommendations from a multicenter needs assessment
Article Source
© 2015 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Glenn Rosenbluth, MD, Department of Pediatrics, 550 16th Street, 5th Floor, San Francisco, CA 94143-0110; Telephone: 415-476-9185; Fax: 415-476-4009; E-mail: rosenbluthg@peds.ucsf.edu

False Alarms and Patient Safety

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Crying wolf: False alarms and patient safety

Despite 15 years of national and local investment in improving the safety of hospital care, patient safety remains a leading problem in both adult and pediatric hospitals. A 2010 study found that 180,000 Medicare beneficiaries likely die each year due to harm suffered as a result of medical care,[1] a death toll surpassed only by deaths due to cardiovascular disease and cancer. Even though initial efforts in the field have shown great promise for stemming the tide of healthcare‐associated infections,[2] surgical errors,[3] handoff failures,[4] and errors in the care of adults hospitalized for myocardial infarction and congestive heart failure,[5] much work remains to be done.[6] The root causes of many adverse events are poorly understood and unaddressed. Resultant tragedies remain all too common.

In the current issue of the Journal of Hospital Medicine, Bonafide and colleagues report the results of an innovative observational pilot study designed to assess the role of an inadequately addressed root cause of serious errors: alarm fatigue.[7] Alarm fatigue is the phenomenon of desensitization to alarms, particularly in the context of excessive false alarms. In a videotaped observational assessment of nurse response times to 5,070 alarms on a pediatric ward and intensive care unit (ICU), the authors found that nurses responded significantly more slowly as the number of nonactionable alarms in the preceding 2 hours increased. Although a substantial majority of these alarms were technically valid (ie, representing true deviations of vital signs outside of the normal range rather than sensor or equipment problems), the vast majority required no action to be taken: approximately 7 out of 8 in the ICU and an astonishing 99 out of 100 on the ward.

As any hospitalist, intensivist, or nurse knows well, alarms are rampant throughout hospitals. It is impossible to walk down any hallway on a busy hospital ward, never mind an ICU, without seeing a flashing light or 2 above a doorway, and hearing the incessant beeping of oxygen saturation and cardiovascular/respiratory monitors, a thousand bits of technology forever crying wolf. The problem, of course, is that sometimes there really is a wolf, but it is hard to take the risk seriously when the false alarms happen not just twice before a true threat materializes, as in Aesop's fable, but 7 times in the ICU, or worse, 99 times in the setting where most hospitalists practice. Moreover, even when the threat is real, in most cases it is caught in time one way or another, and no lasting harm results.

So why not simply shut off the unremitting noise? In 1987, outside of Baltimore, Amtrak experienced what at the time was the deadliest rail crash in its history after 1 of its passenger trains collided with a Conrail freight train. A major root cause of the crash was that the crew on the freight train had placed duct tape over an annoying automated signal alarm.[8, 9] Tragically, on this particular day, the suppressed alarm was all too relevant. Identifying the real alarm, however, can be nearly impossible when it sounds the same as 100 irritating sounds constantly emanating from the environment. It is the challenge of identifying the needle in the haystack, after you have developed an allergy to the hay.

What then to do? More research like that conducted by Bonafide and colleagues is needed to better understand how healthcare providers respond to the onslaught of alarms they encounter, and to inform refinement of these systems. Understanding how alarm fatigue plays out in the context of different clinical settings, with different workloads, varying levels of distraction, and different rates of true and false‐positive alarms will be critical. Furthermore, understanding how individuals' physiologic fatigue, circadian misalignment, mood, stress, and cognitive state may play into alarm response is likewise essential, if we are to design appropriate alarm systems that function effectively in the busy 24‐hour environment of healthcare. Ongoing work suggests that smart alarms, using algorithms that integrate data from multiple vital sign readings over time, may reduce the frequency of false alarms and better identify clinically significant events.[10] Replacing existing range‐limit monitors with these types of smart alarms has the potential to greatly improve both the sensitivity and specificity of hospital alarms, but further work in this area is needed.
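As a purely illustrative sketch of the smart alarm idea, an algorithm might suppress isolated out-of-range readings and alarm only on sustained or multiparameter deviations. The thresholds, window length, and persistence rule below are invented for illustration and are not drawn from the cited studies or from any validated monitor.

def smart_alarm(spo2_readings, hr_readings, spo2_limit=88, hr_limit=180, persistence=3):
    """Alarm only if the last `persistence` readings of a parameter are all
    abnormal, or if two parameters are simultaneously out of range on the
    latest reading. Integrating several readings over time, rather than
    alarming on any single range violation, trades a small delay for fewer
    nonactionable alarms.
    """
    recent_spo2 = spo2_readings[-persistence:]
    recent_hr = hr_readings[-persistence:]
    sustained_low_spo2 = len(recent_spo2) == persistence and all(s < spo2_limit for s in recent_spo2)
    sustained_high_hr = len(recent_hr) == persistence and all(h > hr_limit for h in recent_hr)
    combined = spo2_readings[-1] < spo2_limit and hr_readings[-1] > hr_limit
    return sustained_low_spo2 or sustained_high_hr or combined

# A single brief desaturation does not alarm; a sustained one does.
print(smart_alarm([97, 85, 95], [120, 125, 130]))   # False
print(smart_alarm([87, 86, 85], [120, 125, 130]))   # True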

Ultimately, if we can better separate out the signal, we will be better poised to respond to the true emergencies that are currently obscured by the ever-present noise. Greater trust in the alarm systems we have would help all of us focus our energies on the problems that matter most. In doing so, we could better care for our patients and better identify the system failures that cause them harm in our hospitals.

Disclosures: Dr. Landrigan is supported in part by the Children's Hospital Association for his work as an Executive Council Member of the Pediatric Research in Inpatient Settings network. Dr. Landrigan serves as a consultant to Virgin Pulse regarding sleep, safety, and health. In addition, Dr. Landrigan has received monetary awards, honoraria, and travel reimbursement from multiple academic and professional organizations for delivering lectures on sleep deprivation, physician performance, handoffs, and patient safety, and has served as an expert witness in cases regarding patient safety.

References
  1. Office of the Inspector General. Adverse events in hospitals: national incidence among Medicare beneficiaries. OEI-06-09-00090. Available at: https://oig.hhs.gov/oei/reports/oei-06-09-00090.pdf. Published November 2010. Accessed February 27, 2015.
  2. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006;355:2725–2732.
  3. Haynes AB, Weiser TG, Berry WR, et al. A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med. 2009;360:491–499.
  4. Starmer AJ, Spector ND, Srivastava R, et al. Changes in medical errors after implementation of a resident handoff program. N Engl J Med. 2014;371:1803–1812.
  5. Wang Y, Eldridge N, Metersky ML, et al. National trends in patient safety for four common conditions, 2005–2011. N Engl J Med. 2014;370:341–351.
  6. Landrigan CP, Parry G, Bones CB, et al. Temporal trends in rates of patient harm due to medical care. N Engl J Med. 2010;363:2124–2134.
  7. Bonafide CP, Lin R, Zander M, et al. Association between exposure to nonactionable physiologic monitor alarms and response time in a children's hospital. J Hosp Med. 2015;10(6):345–351.
  8. Sorkin RD. Why are people turning off our alarms? J Acoust Soc Am. 1988;84:1107–1108.
  9. 1987 Maryland train collision. Wikipedia. Available at: http://en.wikipedia.org/wiki/1987_Maryland_train_collision. Accessed February 27, 2015.
  10. Siebig S, Kuhls S, Imhoff M, et al. Collection of annotated data in a clinical validation study for alarm algorithms in intensive care—a methodologic framework. J Crit Care. 2010;25:128–135.
Issue
Journal of Hospital Medicine - 10(6)
Page Number
409-410
Display Headline
Crying wolf: False alarms and patient safety
Article Source
© 2015 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Christopher P. Landrigan, MD, Division of General Pediatrics, Boston Children's Hospital, 300 Longwood Avenue, Enders 1, Boston, MA 02115; Telephone: 617-355-2568; Fax: 617-732-4015; E-mail: clandrigan@partners.org

Resident Use of Handoff Information

Article Type
Changed
Sun, 05/21/2017 - 18:14
Display Headline
Answering questions on call: Pediatric resident physicians' use of handoffs and other resources

Hospital communication failures are a leading cause of serious errors and adverse events in the United States.[1, 2, 3, 4] With the implementation of duty‐hour restrictions for resident physicians,[5] there has been particular focus on the transfer of information during handoffs at change of shift.[6, 7] Many residency programs have sought to improve the processes of written and verbal handoffs through various initiatives, including: (1) automated linkage of handoff forms to electronic medical records (EMRs)[8, 9, 10]; (2) introduction of oral communication curricula, handoff simulation, or mnemonics[11, 12, 13]; and (3) faculty oversight of housestaff handoffs.[14, 15] Underlying each initiative has been the assumption that improving written and verbal handoff processes will ensure the availability of optimal patient information for on‐call housestaff. There has been little investigation, however, into what clinical questions are actually being asked of on‐call trainees, as well as what sources of information they are using to provide answers.

The aim of our study was to examine the extent to which written and verbal handoffs are utilized by pediatric trainees to derive answers to questions posed during overnight shifts. We also sought to describe both the frequency and types of on‐call questions being asked of trainees. Our primary outcome was trainee use of written handoffs to answer on‐call questions. Secondary outcomes included trainee use of verbal handoffs, as well as their use of alternative information resources to answer on‐call questions, including other clinical staff (ie, attending physicians, senior residents, nursing staff), patients and their families, the medical record, or the Internet. We then examined a variety of trainee, patient, and question characteristics to assess potential predictors of written and verbal handoff use.

METHODS

Institutional approval was granted to prospectively observe pediatric interns at the start of their overnight on‐call shifts on 2 inpatient wards at Boston Children's Hospital during 3 winter months (November through January). Our study was conducted during the postintervention period of a larger study that was designed to examine the effectiveness of a new resident handoff bundle on resident workflow and patient safety.[13] Interns rotating on study ward 1 used a structured, nonautomated tool (Microsoft Word version 2003; Microsoft Corp., Redmond, WA). Interns on study ward 2 used a handoff tool that was developed at the study hospital for use with the hospital's EMR, Cerner PowerChart version 2007.17 (Cerner Corp., Kansas City, MO). Interns on both wards received training on specific communication strategies, including verbal and written handoff processes.[13]

For our study, we recorded all questions asked of on-call interns by patients, parents, or other family members, as well as by nurses or other clinical providers, after completion of the evening handoff. We then directly observed all information resources used to derive answers to any questions pertaining to patients discussed in the evening handoff. We excluded any questions about new patient admissions or transfers, as well as nonpatient-related questions.

Both study wards were staffed by separate day and night housestaff teams, who worked shifts of 12 to 14 hours in duration and had similar nursing schedules. The day team consisted of 3 interns and 1 senior resident per ward. The night team consisted of 1 intern on each ward, supervised by a senior resident covering both wards. Each day intern rotated for 1 week (Sunday through Thursday) during their month‐long ward rotation as part of the night team. We considered any intern on either of the 2 study wards to be eligible for enrollment in this study. Written consent was obtained from all participants.

The night intern received a verbal and written handoff at the shift change (usually performed between 5 and 7 PM) from 1 of the departing day interns prior to the start of the observation period. This handoff was conducted face to face in a ward conference room, typically with the on-call night intern and supervising resident receiving the handoff together from the departing day intern and senior resident.

Observation Protocol

Data collection was conducted by an independent, board-certified pediatric physician observer on alternating weeknights immediately after the day-to-night evening handoff had taken place. A strict observation protocol was followed. When an eligible question was asked of the participating intern, the physician observer would record the question and the time. The question source, defined as a nurse, parent/patient, or other clinical staff (eg, pharmacist, consultant), was documented, as well as the mode of questioning, defined as face to face, text page, or phone call.

The observer would then note if and when the question was answered. Once the question was answered, the observer would ask the intern if he or she had used the written handoff to provide the answer (yes or no). Our primary outcome was reported use of the written handoff. In addition, the observer directly noted if the intern looked at the written handoff tool at any time when answering a question. The intern was also asked to name any and all additional information resources used, including verbal handoff, senior resident, nursing staff, other clinicians, a patient/parent or other family member, a patient's physical exam, the EMR, the Internet, or his or her own medical or clinical knowledge.

All question and answer information was tracked using a handheld digital timing device. In addition, the following patient data were recorded for each patient involved in a recorded question: the patient's admitting service, transfer status, and length of stay.
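
To make these recorded fields concrete, the sketch below shows one way a single observed question could be captured as a structured record. It is an illustration only, not the study's actual data-collection instrument; the class and field names (QuestionRecord, resources_used, and so on) are assumptions.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List, Optional

    # Illustrative record for one observed question; names are assumptions,
    # not the study's actual observation form.
    @dataclass
    class QuestionRecord:
        asked_at: datetime                    # time the question was asked
        source: str                           # "nurse", "parent/patient", or "other clinical staff"
        mode: str                             # "face to face", "text page", or "phone call"
        admitting_service: str                # admitting service of the patient involved
        transferred_from_icu: bool            # transfer status
        length_of_stay_days: int              # length of stay at the time of the question
        answered_at: Optional[datetime] = None
        resources_used: List[str] = field(default_factory=list)  # eg, ["written handoff", "EMR"]

    # Example: a nurse's face-to-face medication question answered from the written handoff.
    record = QuestionRecord(
        asked_at=datetime(2012, 1, 10, 19, 30),
        source="nurse",
        mode="face to face",
        admitting_service="general pediatrics",
        transferred_from_icu=False,
        length_of_stay_days=2,
        answered_at=datetime(2012, 1, 10, 19, 33),
        resources_used=["written handoff"],
    )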

Data Categorization and Analysis

The content of recorded questions was categorized according to whether they involved: (1) medications (including drug allergies or levels), (2) diet or fluids, (3) laboratory values or diagnostic testing/procedures, (4) physical exam findings (eg, a distended abdomen, blood pressure, height/weight), or (5) general care-plan questions. We also categorized the time taken to generate an answer as immediate (<5 minutes), delayed (>5 minutes but <1.5 hours), or deferred (any question unanswered during the time of observation).
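
As a worked illustration of these answer-time categories, the following sketch maps the elapsed time between question and answer onto the immediate/delayed/deferred labels defined above. It is a hypothetical helper, not study code; how answers arriving after 1.5 hours were labeled is not specified in the text, so this sketch treats them as deferred.

    from datetime import timedelta
    from typing import Optional

    def categorize_answer_time(elapsed: Optional[timedelta]) -> str:
        """Map question-to-answer time onto the study's categories (illustrative only)."""
        if elapsed is None:
            # Unanswered during the observation period.
            return "deferred"
        if elapsed < timedelta(minutes=5):
            return "immediate"
        if elapsed < timedelta(hours=1, minutes=30):
            return "delayed"
        # Answers beyond 1.5 hours are not explicitly defined in the text;
        # treating them as deferred is an assumption of this sketch.
        return "deferred"

    print(categorize_answer_time(timedelta(minutes=12)))  # -> "delayed"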

All data were entered into a database using SPSS 16.0 Data Builder software (SPSS Inc., Chicago, IL), and statistical analyses were performed with PASW 18 (SPSS Inc.) and SAS 9.2 (SAS Institute Inc., Cary, NC) software. Observed questions were summarized according to content categories. We also described trainee and patient characteristics relevant to the questions being studied. To study risk factors for written handoff use, the outcome was dichotomized as whether or not the intern reported using the written handoff as a resource to answer the question asked. We did not include observed use of the written handoff in these statistical analyses. To accommodate patient- or provider-induced correlations among observed questions, we used a generalized estimating equations (GEE) approach (PROC GENMOD in SAS 9.2) to fit logistic regression models for written handoff use and permitted a nested correlation structure among the questions (ie, questions from the same patient were allowed to be correlated, and patients under the care of the same intern could have intern-induced correlation). Univariate regression modeling was used to evaluate the effects of question, patient, and intern characteristics. Multivariate logistic regression models were used to identify independent risk factors for written handoff use. Any variable with a P value ≤0.1 in the univariate regression model was considered a candidate variable for the multivariate regression model. We then used a backward elimination approach to obtain the final model, which included only variables that remained significant at the P<0.05 level. Our analysis of verbal handoff use was carried out in a similar fashion.
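
For readers who want a concrete picture of this modeling approach, below is a minimal sketch using Python's statsmodels rather than the SAS PROC GENMOD procedure the authors used. It fits a GEE logistic regression for reported written handoff use with an exchangeable working correlation clustered on patient, which is a simplification of the nested patient-within-intern structure described above; the data frame and column names are hypothetical.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical analysis data set: one row per observed question.
    df = pd.DataFrame({
        "written_handoff_used":   [0, 1, 0, 0, 1, 0, 0, 1],  # reported use (primary outcome)
        "diet_question":          [0, 1, 0, 1, 0, 0, 1, 1],  # question about diet or fluids
        "late_consecutive_night": [0, 0, 1, 1, 1, 0, 0, 1],  # 3rd-5th consecutive call night
        "patient_id": ["a", "a", "b", "b", "c", "c", "d", "d"],
    })

    # GEE logistic model: questions from the same patient share a working correlation.
    # (The study used a nested patient-within-intern structure; exchangeable
    # clustering on patient is a simplification for this sketch.)
    model = smf.gee(
        "written_handoff_used ~ diet_question + late_consecutive_night",
        groups="patient_id",
        data=df,
        family=sm.families.Binomial(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    print(model.fit().summary())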

RESULTS

Twenty-eight observation nights (equivalent to 77 hours and 6 minutes of total direct observation time), consisting of 13 sessions on study ward 1 and 15 sessions on study ward 2, were completed. A total of 15 first-year pediatric interns (5 male, 33%; 10 female, 66.7%), with a median age of 27.5 years (interquartile range [IQR]: 26-29 years) participated. Interns on the 2 study wards were comparable with regard to trainee week of service (P=0.43) and consecutive night of call at the time of observation (P=0.45). Each intern was observed for a mean of 2 sessions (range, 1-3 sessions), with a mean observation time per session of approximately 2 hours and 45 minutes (±23 minutes).

Questions

A total of 260 questions (ward 1: 136 questions, ward 2: 124 questions) met inclusion criteria and involved 101 different patients, with a median of 2 questions/patient (IQR: 1-3) and a range of 1 to 14 questions/patient. Overall, interns were asked a median of 2.6 questions/hour (IQR: 1.4-4.7), with a range of 0 to 7 questions per hour; the great majority of questions (210 [82%]) were posed face to face. Types of questions recorded included medications 28% (73), diet/fluids 15% (39), laboratory or diagnostic/procedural related 22% (57), physical exam or other measurements 8.5% (22), or other general medical or patient care-plan questions 26.5% (69) (Table 1). Examples of recorded questions are provided in Table 2.

Table 1. Patient, Question, and Answer Characteristics (No. [%])

Patients, n=101
  Admitting service
    General pediatrics: 49 (48)
    Pediatric subspecialty: 27 (27)
    CCS: 25 (25)
  Transferred from a critical care unit
    Yes: 21 (21)
    No: 80 (79)
Questions, n=260
  Patient's length of stay at time of recorded question*
    ≤2 days: 142 (55)
    >2 days: 118 (45)
  Intern consecutive night shift (1-5)
    1st or 2nd night (early): 86 (33)
    3rd through 5th night (late): 174 (67)
  Intern week of service during a 4-week rotation
    Weeks 1-2 (early): 119 (46)
    Weeks 3-4 (late): 141 (54)
  Question source
    Clinical provider: 167 (64)
    Parent/patient or other family member: 93 (36)
  Question category
    Medications: 73 (28)
    Diet and/or fluids: 39 (15)
    Labs or diagnostic imaging/procedures: 57 (22)
    Physical exam/vital signs/measurements: 22 (8.5)
    Other general medical or patient care plan questions: 69 (26.5)
Answers, n=233
  Resources reported
    Written sign-out: 17 (7.3)
    Verbal sign-out (excluding any written sign-out use): 59 (25.3)
    Other resources: 157 (67.4)

NOTE: Abbreviations: CCS, complex care service. *Patients' inpatient length of stay means time (in days) between admission date and night of recorded question. Interns' week of service and consecutive night mean time (in weeks or days, respectively) between the intern's ward rotation start date and the night of observation. Clinical provider means nursing staff, referring pediatrician, pharmacist, or other clinical provider. Other resources includes general medical/clinical knowledge, the electronic medical record, parents' report, other clinicians' report (ie, senior resident, nursing staff), and the Internet.
Table 2. Question Examples by Category

Medication questions (including medication allergy or drug level questions)
  Could you clarify the lasix orders?
  Pharmacy rejected the medication, what do you want to do?
Dietary and fluid questions
  Do you want to continue NG feeds at 10 mL/hr and advance?
  Is she going to need to be NPO for the biopsy in the AM?
Laboratory or diagnostic tests/procedure questions
  Do you want blood cultures on this patient?
  What was the result of her x-ray?
Physical exam questions (including height/weight or vital sign measurements)
  What do you think of my back (site of biopsy)?
  Is my back okay, because it seems sore after the (renal) biopsy?
Other (patient-related) general medical or care plan questions
  Did you talk with urology about their recommendations?
  Do you know the plan for tomorrow?

NOTE: Abbreviations: AM, morning; NG, nasogastric; NPO, nothing by mouth.

Across the 2 study wards, 48% (49) of patients involved in questions were admitted to a general pediatric service; 27% (27) were admitted to a pediatric specialty service (including the genetics/metabolism, endocrinology, adolescent medicine, pulmonary, or toxicology admitting services); the remaining 25% (25) were admitted to a complex care service (CCS), specifically designed for patients with multisystem genetic, neurological, or congenital disorders (Table 1).[16, 17] Approximately 21% (21) of patients had been transferred to the floor from a critical care unit (Table 1).

Answers

Of the 260 recorded questions, 90% (233) had documented answers. Of the 10% (27) of questions without documented answers, 21 were observed to be verbally deferred by the intern to the day team or another care provider (ie, another physician or nurse); almost half of these (42.9% [9]) involved general care-plan questions, and the remainder involved medication (4), diet (2), diagnostic testing (5), or vital sign (1) questions. An additional 6 questions went unanswered during the observation period, and it is unknown if or when they were answered.

Of the answered questions, 90% (209) of answers were provided by trainees within 5 minutes and 9% (21) within 1.5 hours. In all, interns reported using a single information resource to provide answers for 61% (142) of questions, 2 resources for 33% (76), and 3 or more resources for 6% (15).

Across both study wards, interns reported using information provided in written or verbal handoffs to answer 32.6% of questions. Interns reported using the written handoff, either alone or in combination with other information resources, to provide answers for 7.3% (17) of questions; verbal handoff, either alone or in combination with another resource (excluding written handoff), was reported as a resource for 25.3% (59) of questions. Of note, interns were directly observed to look at the written handoff when answering 21% (49) of questions.
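
As a quick arithmetic check, the percentages in this paragraph are all taken over the 233 answered questions; the short calculation below (illustrative only) reproduces them from the raw counts.

    answered = 233
    written_reported = 17       # reported written handoff use (alone or combined)
    verbal_reported = 59        # reported verbal handoff use, excluding written use
    observed_written = 49       # directly observed glances at the written handoff

    print(f"written:  {written_reported / answered:.1%}")                      # 7.3%
    print(f"verbal:   {verbal_reported / answered:.1%}")                       # 25.3%
    print(f"either:   {(written_reported + verbal_reported) / answered:.1%}")  # 32.6%
    print(f"observed: {observed_written / answered:.1%}")                      # 21.0%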

A variety of other resources, including general medical/clinical knowledge, the EMR, and parents or other resources, were used to answer the remaining 67.4% (157) of questions. Intern general medical knowledge (ie, reports of simply knowing the answer to the question in their head[s]) was used to provide answers for 53.2% (124) of questions asked.

Unadjusted univariate regression analyses assessing predictors of written and verbal handoff use are shown in Figure 1. Multivariate logistic regression analyses showed that both dietary questions (odds ratio [OR]: 3.64, 95% confidence interval [CI]: 1.51-8.76; P=0.004) and interns' consecutive call night (OR: 0.29, 95% CI: 0.09-0.93; P=0.04) remained significant predictors of written handoff use. After adjusting for risk factors identified above, no differences in written handoff use were seen between the 2 wards.

Figure 1
Univariate predictors of written and verbal handoff use. Physical exam/measurement questions are not displayed in this graph as they were not associated with written or verbal handoff use. Abbreviations: CI, confidence interval; ICU, intensive care unit. *P < 0.05 = significant univariate predictor of written handoff use. **P < 0.05 = significant univariate predictor of verbal handoff use.

Multivariate logistic regression for predictors of verbal handoff use showed that questions regarding patients with longer lengths of stay (OR: 1.97, 95% CI: 1.02-3.8; P=0.04), those regarding general care plans (OR: 2.07, 95% CI: 1.13-3.78; P=0.02), and those asked by clinical staff (OR: 1.95, 95% CI: 1.04-3.66; P=0.04) remained significant predictors of reported verbal handoff use.
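
For readers less familiar with how the reported odds ratios and confidence intervals are derived from the fitted models, the calculation below shows the standard back-transformation from a logistic regression coefficient and its standard error. The coefficient and standard error used here are hypothetical round numbers chosen to give an odds ratio near 2, not the study's actual estimates.

    import math

    beta, se = 0.68, 0.33  # hypothetical coefficient and standard error

    odds_ratio = math.exp(beta)
    ci_low = math.exp(beta - 1.96 * se)
    ci_high = math.exp(beta + 1.96 * se)
    # A 95% CI that excludes 1 corresponds to a two-sided Wald P value below 0.05.
    print(f"OR = {odds_ratio:.2f} (95% CI: {ci_low:.2f}-{ci_high:.2f})")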

DISCUSSION

In light of the recent changes in duty hours implemented in July 2011, many pediatric training programs are having trainees work in day and night shifts.[18] Pediatric resident physicians frequently answer questions that pertain to patients handed off between day and night shifts. We found that on average, information provided in the verbal and written handoff was used almost once per hour. Housestaff in our study generally based their answers on information found in 1 or 2 resources, with almost one‐third of all questions involving some use of the written or verbal handoff. Prior research has documented widespread problems with resident handoff practices across programs and a high rate of medical errors due to miscommunications.[3, 4, 19, 20] Given how often information contained within the handoff was used as interns went about their nightly tasks, it is not difficult to understand how errors or omissions in the handoff process may potentially translate into frequent problems in direct patient care.

Trainees reported using written handoff tools to provide answers for 7.3% of questions. As we had suspected, they relied less frequently on their written handoffs as they completed more consecutive call nights. Interestingly, however, even when housestaff did not report using the written handoff, they were observed quite often to look at it before providing an answer. One explanation for this discrepancy between trainee reports and our observations is that the written handoff may serve as a memory tool, even if housestaff do not directly attribute their answers to its content. Our study also found that answers to questions concerning patients' diet and fluids were more likely to be ascribed to information contained in the written handoff. This finding supports the potential value of automated written handoff tools that are linked to the EMR, which can best ensure accuracy of this type of information.

Housestaff in our study also reported using information received during the verbal handoff to answer 1 out of every 4 on‐call questions. Although we did not specifically rate or monitor the quality of verbal handoffs, prior research has demonstrated that resident verbal handoff is often plagued with incomplete and inaccurate data.[3, 4, 19, 21] One investigation found that pediatric interns were prone to overestimating the effectiveness of their verbal handoffs, even as they failed to convey urgent information to their peers.[19] In light of such prior work, our finding that interns frequently rely on the verbal transfer of information supports specific residency training program handoff initiatives that target verbal exchanges.[11, 22, 23]

Although information obtained in the handoff was frequently required by on-call housestaff, our study found that two-thirds of all questions were answered using other resources, most often general medical or clinical knowledge. Clearly, background knowledge and experience are fundamental to trainees' ability to perform their jobs. Such reliance on general knowledge for problem solving may not be unique to interns. One recent observational study of senior pediatric cardiac subspecialists reported a high frequency of reliance on their own clinical experience, instinct, or prior training in making clinical decisions.[24] Further investigation may be useful to parse out the exact types of clinical knowledge being used and may have important implications for how training programs plan for overnight supervision.[25, 26, 27]

Our study has several limitations. First, it was beyond the scope of this study to link housestaff answers to patient outcomes or medical errors. Given the frequency with which the handoff, a known source of vulnerability to medical error, was used by on-call housestaff, our study suggests that future research evaluating the relationship between questions asked of on-call housestaff, the answers provided, and downstream patient safety incidents may be merited. Second, our study was conducted in a single pediatric residency program, with a single physician observer, midway through the interns' first year of training, and only in the early evening hours. This limits the generalizability of our findings, as the use of handoffs to answer on-call questions may differ at other stages of the training process, within other specialties, or even at different times of day. We also began our observations after the handoff had taken place; future studies may want to assess how variations in written and verbal handoff processes affect their use. As a final limitation, we note that although collecting information in real time using a direct observational method eliminated the problem of recall bias, there may have been attribution bias.

The results of our study demonstrate that on‐call pediatric housestaff are frequently asked a variety of clinical questions posed by hospital staff, patients, and their families. We found that trainees are apt to rely both on handoff information and other resources to provide answers. By better understanding what resources on‐call housestaff are accessing to answer questions overnight, we may be able to better target interventions needed to improve the availability of patient information, as well as the usefulness of written and verbal handoff tools.[11, 22, 23]

Acknowledgments

The authors thank Katharine Levinson, MD, and Melissa Atmadja, BA, for their help with the data review and guidance with database management. The authors also thank the housestaff from the Boston Combined Residency Program in Pediatrics for their participation in this study.

Disclosures: Maireade E. McSweeney, MD, as the responsible author certifies that all coauthors have seen and agree with the contents of this article, takes responsibility for the accuracy of these data, and certifies that this information is not under review by any other publication. All authors had no financial conflicts of interest or conflicts of interest relevant to this article to disclose. Dr. Landrigan is supported in part by the Children's Hospital Association for his work as an Executive Council member of the Pediatric Research in Inpatient Settings network. In addition, he has received honoraria from the Committee of Interns and Residents as well as multiple academic medical centers for lectures delivered on handoffs, sleep deprivation, and patient safety, and he has served as an expert witness in cases regarding patient safety and sleep deprivation.

References
  1. Improving America's hospitals: The Joint Commission's annual report on quality and safety. 2007. Available at: http://www.jointcommission.org/Improving_Americas_Hospitals_The_Joint_Commissions_Annual_Report_on_Quality_and_Safety_-_2007. Accessed October 3, 2011.
  2. US Department of Health and Human Services, Office of Inspector General. Adverse events in hospitals: methods for identifying events. 2010. Available at: http://oig.hhs.gov/oei/reports/oei-06-08-00221.pdf. Accessed October 3, 2011.
  3. Arora V, Johnson J, Lovinger D, Humphrey HJ, Meltzer DO. Communication failures in patient sign-out and suggestions for improvement: a critical incident analysis. Qual Saf Health Care. 2005;14:401-407.
  4. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. Consequences of inadequate sign-out for patient care. Arch Intern Med. 2008;168:1755-1760.
  5. Accreditation Council for Graduate Medical Education. Common program requirements. 2010. Available at: http://acgme-2010standards.org/pdf/Common_Program_Requirements_07012011.pdf. Accessed January 25, 2011.
  6. Volpp KG, Landrigan CP. Building physician work hour regulations from first principles and best evidence. JAMA. 2008;300:1197-1199.
  7. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign-out. J Hosp Med. 2006;1:257-266.
  8. Eaton EG, Horvath KD, Lober WB, Rossini AJ, Pellegrini CA. A randomized, controlled trial evaluating the impact of a computerized rounding and sign-out system on continuity of care and resident work hours. J Am Coll Surg. 2005;200:538-545.
  9. Wayne JD, Tyagi R, Reinhardt G, Rooney D, Makoul G, Chopra S, DaRosa D. Simple standardized patient handoff system that increases accuracy and completeness. J Surg Educ. 2008;65:476-485.
  10. Li P, Ali S, Tang C, Ghali WA, Stelfox HT. Review of computerized physician handoff tools for improving the quality of patient care [published online ahead of print November 20, 2012]. J Hosp Med. doi: 10.1002/jhm.1988.
  11. Sectish TC, Starmer AJ, Landrigan CP, Spector ND. Establishing a multisite education and research project requires leadership, expertise, collaboration, and an important aim. Pediatrics. 2010;126:619-622.
  12. Farnan JM, Paro JA, Rodriguez RM, et al. Hand-off education and evaluation: piloting the observed simulated hand-off experience (OSHE). J Gen Intern Med. 2009;25:129-134.
  13. Starmer AJ, Spector ND, Srivastava R, Allen AD, Landrigan CP, Sectish TC. I-PASS, a mnemonic to standardize verbal handoffs. Pediatrics. 2012;129:201-204.
  14. Chu ES, Reid M, Schulz T, et al. A structured handoff program for interns. Acad Med. 2009;84:347-352.
  15. Nabors C, Peterson SJ, Lee WN, et al. Experience with faculty supervision of an electronic resident sign-out system. Am J Med. 2010;123:376-381.
  16. Berry JG, Hall DE, Kuo DZ, et al. Hospital utilization and characteristics of patients experiencing recurrent readmissions within children's hospitals. JAMA. 2011;305:682-690.
  17. Simon TD, Berry J, Feudtner C, et al. Children with complex chronic conditions in inpatient hospital settings in the United States. Pediatrics. 2010;126:647-655.
  18. Chua KP, Gordon MB, Sectish T, Landrigan CP. Effects of a night-team system on resident sleep and work hours. Pediatrics. 2011;128:1142-1147.
  19. Chang VY, Arora VM, Lev-Ari S, D'Arcy M, Keysar B. Interns overestimate the effectiveness of their hand-off communication. Pediatrics. 2010;125:491-496.
  20. McSweeney ME, Lightdale JR, Vinci RJ, Moses J. Patient handoffs: pediatric resident experiences and lessons learned. Clin Pediatr (Phila). 2011;50:57-63.
  21. Borowitz SM, Waggoner-Fountain LA, Bass EJ, Sledd RM. Adequacy of information transferred at resident sign-out (in-hospital handover of care): a prospective survey. Qual Saf Health Care. 2008;17:6-10.
  22. Arora V, Johnson J. A model for building a standardized hand-off protocol. Jt Comm J Qual Patient Saf. 2006;32:646-655.
  23. Horwitz LI, Moin T, Green ML. Development and implementation of an oral sign-out skills curriculum. J Gen Intern Med. 2007;22:1470-1474.
  24. Darst JR, Newburger JW, Resch S, Rathod RH, Lock JE. Deciding without data. Congenit Heart Dis. 2010;5:339-342.
  25. Farnan JM, Petty LA, Georgitis E, et al. A systematic review: the effect of clinical supervision on patient and residency education outcomes. Acad Med. 2012;87:428-442.
  26. Haber LA, Lau CY, Sharpe BA, Arora VM, Farnan JM, Ranji SR. Effects of increased overnight supervision on resident education, decision-making, and autonomy. J Hosp Med. 2012;7:606-610.
  27. Farnan JM, Burger A, Boonayasai RT, et al. Survey of overnight academic hospitalist supervision of trainees. J Hosp Med. 2012;7:521-523.
Issue
Journal of Hospital Medicine - 8(6)
Page Number
328-333

Hospital communication failures are a leading cause of serious errors and adverse events in the United States.[1, 2, 3, 4] With the implementation of duty‐hour restrictions for resident physicians,[5] there has been particular focus on the transfer of information during handoffs at change of shift.[6, 7] Many residency programs have sought to improve the processes of written and verbal handoffs through various initiatives, including: (1) automated linkage of handoff forms to electronic medical records (EMRs)[8, 9, 10]; (2) introduction of oral communication curricula, handoff simulation, or mnemonics[11, 12, 13]; and (3) faculty oversight of housestaff handoffs.[14, 15] Underlying each initiative has been the assumption that improving written and verbal handoff processes will ensure the availability of optimal patient information for on‐call housestaff. There has been little investigation, however, into what clinical questions are actually being asked of on‐call trainees, as well as what sources of information they are using to provide answers.

The aim of our study was to examine the extent to which written and verbal handoffs are utilized by pediatric trainees to derive answers to questions posed during overnight shifts. We also sought to describe both the frequency and types of on‐call questions being asked of trainees. Our primary outcome was trainee use of written handoffs to answer on‐call questions. Secondary outcomes included trainee use of verbal handoffs, as well as their use of alternative information resources to answer on‐call questions, including other clinical staff (ie, attending physicians, senior residents, nursing staff), patients and their families, the medical record, or the Internet. We then examined a variety of trainee, patient, and question characteristics to assess potential predictors of written and verbal handoff use.

METHODS

Institutional approval was granted to prospectively observe pediatric interns at the start of their overnight on‐call shifts on 2 inpatient wards at Boston Children's Hospital during 3 winter months (November through January). Our study was conducted during the postintervention period of a larger study that was designed to examine the effectiveness of a new resident handoff bundle on resident workflow and patient safety.[13] Interns rotating on study ward 1 used a structured, nonautomated tool (Microsoft Word version 2003; Microsoft Corp., Redmond, WA). Interns on study ward 2 used a handoff tool that was developed at the study hospital for use with the hospital's EMR, Cerner PowerChart version 2007.17 (Cerner Corp., Kansas City, MO). Interns on both wards received training on specific communication strategies, including verbal and written handoff processes.[13]

For our study, we recorded all questions being asked of on‐call interns by patients, parents, or other family members, as well as nurses or other clinical providers after completion of their evening handoff. We then directly observed all information resources used to derive answers to any questions asked pertaining to patients discussed in the evening handoff. We excluded any questions about new patient admissions or transfers, as well as nonpatient‐related questions.

Both study wards were staffed by separate day and night housestaff teams, who worked shifts of 12 to 14 hours in duration and had similar nursing schedules. The day team consisted of 3 interns and 1 senior resident per ward. The night team consisted of 1 intern on each ward, supervised by a senior resident covering both wards. Each day intern rotated for 1 week (Sunday through Thursday) during their month‐long ward rotation as part of the night team. We considered any intern on either of the 2 study wards to be eligible for enrollment in this study. Written consent was obtained from all participants.

The night intern received a verbal and written handoff at the shift change (usually performed between 5 and 7pm) from 1 of the departing day interns prior to the start of the observation period. This handoff was conducted face‐to‐face in a ward conference room typically with the on‐call night intern and supervising resident receiving the handoff together from the departing day intern/senior resident.

Observation Protocol

Data collection was conducted by an independent, board‐certified, pediatric physician observer on alternating weeknights immediately after the day‐to‐night evening handoff had taken place. A strict observation protocol was followed. When an eligible question was asked of the participating intern, the physician observer would record the question and the time. The question source, defined as a nurse, parent/patient, or other clinical staff (eg, pharmacist, consultant) was documented, as well as the mode of questioning, defined as face to face, text page, or phone call.

The observer would then note if and when the question was answered. Once the question was answered, the observer would ask the intern if he or she had used the written handoff to provide the answer (yes or no). Our primary outcome was reported use of the written handoff. In addition, the observer directly noted if the intern looked at the written handoff tool at any time when answering a question. The intern was also asked to name any and all additional information resources used, including verbal handoff, senior resident, nursing staff, other clinicians, a patient/parent or other family member, a patient's physical exam, the EMR, the Internet, or his or her own medical or clinical knowledge.

All question and answer information was tracked using a handheld, digital, time device. In addition, the following patient data were recorded for each patient involved in a recorded question: the patient's admitting service, transfer status, and length of stay.

Data Categorization and Analysis

Content of recorded questions were categorized according to whether they involved: (1) medications (including drug allergies or levels), (2) diet or fluids, (3) laboratory values or diagnostic testing/procedures, (4) physical exam findings (eg, a distended abdomen, blood pressure, height/weight), or (5) general care‐plan questions. We also categorized time used for generating an answer as immediate (<5 minutes), delayed (>5 minutes but <1.5 hours), or deferred (any question unanswered during the time of observation).

All data were entered into a database using SPSS 16.0 Data Builder software (SPSS Inc., Chicago, IL), and statistical analyses were performed with PASW 18 (SPSS Inc.) and SAS 9.2 (SAS Institute Inc., Cary, NC) software. Observed questions were summarized according to content categories. We also described trainee and patient characteristics relevant to the questions being studied. To study risk factors for written handoff use, the outcome was dichotomized as reported use of written handoff by the intern as a resource to answer the question asked versus written handoff use was not reported by the intern as a resource to answer the question asked. We did not include observed use of the written handoff in these statistical analyses. To accommodate for patient‐ or provider‐induced correlations among observed questions, we used a generalized estimation equations approach (PROC GENMOD in SAS 9.2) to fit logistic regression models for written handoff use and permitted a nested correlation structure among the questions (ie, questions from the same patient were allowed to be correlated, and patients under the care of the same intern could have intern‐induced correlation). Univariate regression modeling was used to evaluate the effects of question, patient, and intern characteristics. Multivariate logistic regression models were used to identify independent risk factors for written handoff use. Any variable that had a P value 0.1 in univariate regression model was considered as a candidate variable in the multivariate regression model. We then used a backward elimination approach to obtain the final model, which only included variables remaining to be significant at a P<0.05 significance level. Our analysis for verbal handoff use was carried out in a similar fashion.

RESULTS

Twenty‐eight observation nights (equivalent to 77 hours and 6 minutes of total direct observation time), consisting of 13 sessions on study ward 1 and 15 sessions on study ward 2, were completed. A total of 15 first‐year pediatric interns (5 male, 33%; 10 female, 66.7%), with a median age of 27.5 years (interquartile range [IQR]: 2629 years) participated. Interns on the 2 study wards were comparable with regard to trainee week of service (P=0.43) and consecutive night of call at the time of observation (P=0.45). Each intern was observed for a mean of 2 sessions (range, 13 sessions), with a mean observation time per session of approximately 2 hours and 45 minutes ( 23 minutes).

Questions

A total of 260 questions (ward 1: 136 questions, ward 2: 124 questions) met inclusion criteria and involved 101 different patients, with a median of 2 questions/patient (IQR: 13) and a range of 1 to 14 questions/patient. Overall, interns were asked 2.6 questions/hour (IQR: 1.44.7), with a range of 0 to 7 questions per hour; the great majority of questions (210 [82%]) were posed face to face. Types of questions recorded included medications 28% (73), diet/fluids 15% (39), laboratory or diagnostic/procedural related 22% (57), physical exam or other measurements 8.5% (22), or other general medical or patient care‐plan questions 26.5% (69) (Table 1). Examples of recorded questions are provided in Table 2.

Patient, Question, and Answer Characteristics
 No. (%)
  • NOTE: Abbreviations: CCS, complex care service. *Patients' inpatient length of stay means time (in days) between admission date and night of recorded question. Interns' week of service and consecutive night means time (in weeks or days, respectively) between interns' ward rotation start date and night of observation. Clinical provider means nursing staff, referring pediatrician, pharmacist, or other clinical provider. Other resources includes general medical/clinical knowledge, the electronic medical record, parents' report, other clinicians' report (ie, senior resident, nursing staff), Internet.

Patients, n=101 
Admitting services 
General pediatrics49 (48)
Pediatric subspecialty27 (27)
CCS*25 (25)
Patients transferred from critical care unit 
Yes21 (21)
No80 (79)
Questions, n=260 
Patients' length of stay at time of recorded question* 
2 days142 (55)
>2 days118 (45)
Intern consecutive night shift (15) 
1st or 2nd night (early)86 (33)
3rd through 5th night (late)174 (67)
Intern week of service during a 4‐week rotation 
Weeks 12 (early)119 (46)
Weeks 34 (late)141 (54)
Question sources 
Clinical provider167 (64)
Parent/patient or other family member93 (36)
Question categories 
Medications73 (28)
Diet and/or fluids39 (15)
Labs or diagnostic imaging/procedures57 (22)
Physical exam/vital signs/measurements22 (8.5)
Other general medical or patient care plan questions69 (26.5)
Answers, n=233 
Resources reported 
Written sign‐out17 (7.3)
Verbal sign‐out (excluding any written sign‐out use)59 (25.3)
Other resources157 (67.4)
Question Examples by Category
Question Categories
  • NOTE: Abbreviations: AM, morning; NG, nasogastric; NPO, nothing by mouth.

Medication questions (including medication allergy or drug level questions)
Could you clarify the lasix orders?
Pharmacy rejected the medication, what do you want to do?
Dietary and fluid questions
Do you want to continue NG feeds at 10 mL/hr and advance?
Is she going to need to be NPO for the biopsy in the AM?
Laboratory or diagnostic tests/procedure questions
Do you want blood cultures on this patient?
What was the result of her x‐ray?
Physical exam questions (including height/weight or vital sign measurements)
What do you think of my back (site of biopsy)?
Is my back okay, because it seems sore after the (renal) biopsy?
Other (patient related) general medical or care plan questions
Did you talk with urology about their recommendations?
Do you know the plan for tomorrow?

Across the 2 study wards, 48% (49) of patients involved in questions were admitted to a general pediatric service; 27% (27) were admitted to a pediatric specialty service (including the genetics/metabolism, endocrinology, adolescent medicine, pulmonary, or toxicology admitting services); the remaining 25% (25) were admitted to a complex care service (CCS), specifically designed for patients with multisystem genetic, neurological, or congenital disorders (Table 1).[16, 17] Approximately 21% (21) of patients had been transferred to the floor from a critical care unit (Table 1).

Answers

Of the 260 recorded questions, 90% (233) had documented answers. For the 10% (27) of questions with undocumented answers, 21 were observed to be verbally deferred by the intern to the day team or another care provider (ie, other physician or nurse), and almost half (42.9% [9]) involved general care‐plan questions; the remainder involved medication (4), diet (2), diagnostic testing (5), or vital sign (1) questions. An additional 6 questions went unanswered during the observation period, and it is unknown if or when they were answered.

Of the answered questions, 90% (209) of answers were provided by trainees within 5 minutes and 9% (21) within 1.5 hours. In all, interns reported using 1 information resource to provide answers for 61% (142) of questions, at least 2 resources for 33% (76) questions, and 3 resources for 6% (15) questions.

Across both study wards, interns reported using information provided in written or verbal handoffs to answer 32.6% of questions. Interns reported using the written handoff, either alone or in combination with other information resources, to provide answers for 7.3% (17) of questions; verbal handoff, either alone or in combination with another resource (excluding written handoff), was reported as a resource for 25.3% (59) of questions. Of note, interns were directly observed to look at the written handoff when answering 21% (49) of questions.

A variety of other resources, including general medical/clinical knowledge, the EMR, and parents or other resources, were used to answer the remaining 67.4% (157) of questions. Intern general medical knowledge (ie, reports of simply knowing the answer to the question in their head[s]) was used to provide answers for 53.2% (124) of questions asked.

Unadjusted univariate regression analyses assessing predictors of written and verbal handoff use are shown in Figure 1. Multivariate logistic regression analyses showed that both dietary questions (odds ratio [OR]: 3.64, 95% confidence interval [CI]: 1.518.76; P=0.004) and interns' consecutive call night (OR: 0.29, 95% CI: 0.090.93; P=0.04) remained significant predictors of written handoff use. After adjusting for risk factors identified above, no differences in written handoff use were seen between the 2 wards.

Figure 1
Univariate predictors of written and verbal handoff use. Physical exam/measurement questions are not displayed in this graph as they were not associated with written or verbal handoff use. Abbreviations: CI, confidence interval; ICU, intensive care unit. *P < 0.05 = significant univariate predictor of written handoff use. **P < 0.05 = significant univariate predictor of verbal handoff use.

Multivariate logistic regression for predictors of the verbal handoff use showed that questions regarding patients with longer lengths of stay (OR: 1.97, 95% CI: 1.023.8; P=0.04), those regarding general care plans (OR: 2.07, 95% CI: 1.133.78; P=0.02), as well as those asked by clinical staff (OR: 1.95, 95 CI: 1.043.66; P=0.04), remained significant predictors of reported verbal handoff use.

DISCUSSION

In light of the recent changes in duty hours implemented in July 2011, many pediatric training programs are having trainees work in day and night shifts.[18] Pediatric resident physicians frequently answer questions that pertain to patients handed off between day and night shifts. We found that on average, information provided in the verbal and written handoff was used almost once per hour. Housestaff in our study generally based their answers on information found in 1 or 2 resources, with almost one‐third of all questions involving some use of the written or verbal handoff. Prior research has documented widespread problems with resident handoff practices across programs and a high rate of medical errors due to miscommunications.[3, 4, 19, 20] Given how often information contained within the handoff was used as interns went about their nightly tasks, it is not difficult to understand how errors or omissions in the handoff process may potentially translate into frequent problems in direct patient care.

Trainees reported using written handoff tools to provide answers for 7.3% of questions. As we had suspected, they relied less frequently on their written handoffs as they completed more consecutive call nights. Interestingly, however, even when housestaff did not report using the written handoff, they were observed quite often to look at it before providing an answer. One explanation for this discrepancy between trainee reports and our observations is that the written handoff may serve as a memory tool, even if housestaff do not directly attribute their answers to its content. Our study also found that answers to questions concerning patients' diet and fluids were more likely to be ascribed to information contained in the written handoff. This finding supports the potential value of automated written handoff tools that are linked to the EMR, which can best ensure accuracy of this type of information.

Housestaff in our study also reported using information received during the verbal handoff to answer 1 out of every 4 on‐call questions. Although we did not specifically rate or monitor the quality of verbal handoffs, prior research has demonstrated that resident verbal handoff is often plagued with incomplete and inaccurate data.[3, 4, 19, 21] One investigation found that pediatric interns were prone to overestimating the effectiveness of their verbal handoffs, even as they failed to convey urgent information to their peers.[19] In light of such prior work, our finding that interns frequently rely on the verbal transfer of information supports specific residency training program handoff initiatives that target verbal exchanges.[11, 22, 23]

Although information obtained in the handoff was frequently required by on‐call housestaff, our study found that two‐thirds of all questions were answered using other resources, most often general medical or clinical knowledge. Clearly, background knowledge and experience is fundamental to trainees' ability to perform their jobs. Such reliance on general knowledge for problem solving may not be unique to interns. One recent observational study of senior pediatric cardiac subspecialists reported a high frequency of reliance on their own clinical experience, instinct, or prior training in making clinical decisions.[24] Further investigation may be useful to parse out the exact types of clinical knowledge being used, and may have important implications for how training programs plan for overnight supervision.[25, 26, 27]

Our study has several limitations. First, it was beyond the scope of this study to link housestaff answers to patient outcomes or medical errors. Given the frequency with which the handoff, a known source of vulnerability to medical error, was used by on‐call housestaff, our study suggests that future research evaluating the relationship between questions asked of on‐call housestaff, the answers provided, and downstream patient safety incidents may be merited. Second, our study was conducted in a single pediatric residency program with 1 physician observer midway through the first year of training and only in the early evening hours. This limits the generalizability of our findings, as the use of handoffs to answer on‐call questions may be different at other stages of the training process, within other specialties, or even at different times of the day. We also began our observations after the handoff had taken place; future studies may want to assess how variations in written and verbal handoff processes affect their use. As a final limitation, we note that although collecting information in real time using a direct observational method eliminated the problem of recall bias, there may have been attribution bias.

The results of our study demonstrate that on‐call pediatric housestaff are frequently asked a variety of clinical questions posed by hospital staff, patients, and their families. We found that trainees are apt to rely both on handoff information and other resources to provide answers. By better understanding what resources on‐call housestaff are accessing to answer questions overnight, we may be able to better target interventions needed to improve the availability of patient information, as well as the usefulness of written and verbal handoff tools.[11, 22, 23]

Acknowledgments

The authors thank Katharine Levinson, MD, and Melissa Atmadja, BA, for their help with the data review and guidance with database management. The authors also thank the housestaff from the Boston Combined Residency Program in Pediatrics for their participation in this study.

Disclosures: Maireade E. McSweeney, MD, as the responsible author certifies that all coauthors have seen and agree with the contents of this article, takes responsibility for the accuracy of these data, and certifies that this information is not under review by any other publication. All authors had no financial conflicts of interest or conflicts of interest relevant to this article to disclose. Dr. Landrigan is supported in part by the Children's Hospital Association for his work as an Executive Council member of the Pediatric Research in Inpatient Settings network. In addition, he has received honoraria from the Committee of Interns and Residents as well as multiple academic medical centers for lectures delivered on handoffs, sleep deprivation, and patient safety, and he has served as an expert witness in cases regarding patient safety and sleep deprivation.

Hospital communication failures are a leading cause of serious errors and adverse events in the United States.[1, 2, 3, 4] With the implementation of duty‐hour restrictions for resident physicians,[5] there has been particular focus on the transfer of information during handoffs at change of shift.[6, 7] Many residency programs have sought to improve the processes of written and verbal handoffs through various initiatives, including: (1) automated linkage of handoff forms to electronic medical records (EMRs)[8, 9, 10]; (2) introduction of oral communication curricula, handoff simulation, or mnemonics[11, 12, 13]; and (3) faculty oversight of housestaff handoffs.[14, 15] Underlying each initiative has been the assumption that improving written and verbal handoff processes will ensure the availability of optimal patient information for on‐call housestaff. There has been little investigation, however, into what clinical questions are actually being asked of on‐call trainees, as well as what sources of information they are using to provide answers.

The aim of our study was to examine the extent to which written and verbal handoffs are utilized by pediatric trainees to derive answers to questions posed during overnight shifts. We also sought to describe both the frequency and types of on‐call questions being asked of trainees. Our primary outcome was trainee use of written handoffs to answer on‐call questions. Secondary outcomes included trainee use of verbal handoffs, as well as their use of alternative information resources to answer on‐call questions, including other clinical staff (ie, attending physicians, senior residents, nursing staff), patients and their families, the medical record, or the Internet. We then examined a variety of trainee, patient, and question characteristics to assess potential predictors of written and verbal handoff use.

METHODS

Institutional approval was granted to prospectively observe pediatric interns at the start of their overnight on‐call shifts on 2 inpatient wards at Boston Children's Hospital during 3 winter months (November through January). Our study was conducted during the postintervention period of a larger study that was designed to examine the effectiveness of a new resident handoff bundle on resident workflow and patient safety.[13] Interns rotating on study ward 1 used a structured, nonautomated tool (Microsoft Word version 2003; Microsoft Corp., Redmond, WA). Interns on study ward 2 used a handoff tool that was developed at the study hospital for use with the hospital's EMR, Cerner PowerChart version 2007.17 (Cerner Corp., Kansas City, MO). Interns on both wards received training on specific communication strategies, including verbal and written handoff processes.[13]

For our study, we recorded all questions being asked of on‐call interns by patients, parents, or other family members, as well as nurses or other clinical providers after completion of their evening handoff. We then directly observed all information resources used to derive answers to any questions asked pertaining to patients discussed in the evening handoff. We excluded any questions about new patient admissions or transfers, as well as nonpatient‐related questions.

Both study wards were staffed by separate day and night housestaff teams, who worked shifts of 12 to 14 hours in duration and had similar nursing schedules. The day team consisted of 3 interns and 1 senior resident per ward. The night team consisted of 1 intern on each ward, supervised by a senior resident covering both wards. Each day intern rotated for 1 week (Sunday through Thursday) during their month‐long ward rotation as part of the night team. We considered any intern on either of the 2 study wards to be eligible for enrollment in this study. Written consent was obtained from all participants.

The night intern received a verbal and written handoff at the shift change (usually performed between 5 and 7pm) from 1 of the departing day interns prior to the start of the observation period. This handoff was conducted face‐to‐face in a ward conference room typically with the on‐call night intern and supervising resident receiving the handoff together from the departing day intern/senior resident.

Observation Protocol

Data collection was conducted by an independent, board‐certified, pediatric physician observer on alternating weeknights immediately after the day‐to‐night evening handoff had taken place. A strict observation protocol was followed. When an eligible question was asked of the participating intern, the physician observer would record the question and the time. The question source, defined as a nurse, parent/patient, or other clinical staff (eg, pharmacist, consultant) was documented, as well as the mode of questioning, defined as face to face, text page, or phone call.

The observer would then note if and when the question was answered. Once the question was answered, the observer would ask the intern if he or she had used the written handoff to provide the answer (yes or no). Our primary outcome was reported use of the written handoff. In addition, the observer directly noted if the intern looked at the written handoff tool at any time when answering a question. The intern was also asked to name any and all additional information resources used, including verbal handoff, senior resident, nursing staff, other clinicians, a patient/parent or other family member, a patient's physical exam, the EMR, the Internet, or his or her own medical or clinical knowledge.

All question and answer information was tracked using a handheld, digital, time device. In addition, the following patient data were recorded for each patient involved in a recorded question: the patient's admitting service, transfer status, and length of stay.

Data Categorization and Analysis

Content of recorded questions were categorized according to whether they involved: (1) medications (including drug allergies or levels), (2) diet or fluids, (3) laboratory values or diagnostic testing/procedures, (4) physical exam findings (eg, a distended abdomen, blood pressure, height/weight), or (5) general care‐plan questions. We also categorized time used for generating an answer as immediate (<5 minutes), delayed (>5 minutes but <1.5 hours), or deferred (any question unanswered during the time of observation).

All data were entered into a database using SPSS 16.0 Data Builder software (SPSS Inc., Chicago, IL), and statistical analyses were performed with PASW 18 (SPSS Inc.) and SAS 9.2 (SAS Institute Inc., Cary, NC) software. Observed questions were summarized according to content categories. We also described trainee and patient characteristics relevant to the questions being studied. To study risk factors for written handoff use, the outcome was dichotomized as reported use of written handoff by the intern as a resource to answer the question asked versus written handoff use was not reported by the intern as a resource to answer the question asked. We did not include observed use of the written handoff in these statistical analyses. To accommodate for patient‐ or provider‐induced correlations among observed questions, we used a generalized estimation equations approach (PROC GENMOD in SAS 9.2) to fit logistic regression models for written handoff use and permitted a nested correlation structure among the questions (ie, questions from the same patient were allowed to be correlated, and patients under the care of the same intern could have intern‐induced correlation). Univariate regression modeling was used to evaluate the effects of question, patient, and intern characteristics. Multivariate logistic regression models were used to identify independent risk factors for written handoff use. Any variable that had a P value 0.1 in univariate regression model was considered as a candidate variable in the multivariate regression model. We then used a backward elimination approach to obtain the final model, which only included variables remaining to be significant at a P<0.05 significance level. Our analysis for verbal handoff use was carried out in a similar fashion.

RESULTS

Twenty‐eight observation nights (equivalent to 77 hours and 6 minutes of total direct observation time), consisting of 13 sessions on study ward 1 and 15 sessions on study ward 2, were completed. A total of 15 first‐year pediatric interns (5 male, 33%; 10 female, 66.7%), with a median age of 27.5 years (interquartile range [IQR]: 2629 years) participated. Interns on the 2 study wards were comparable with regard to trainee week of service (P=0.43) and consecutive night of call at the time of observation (P=0.45). Each intern was observed for a mean of 2 sessions (range, 13 sessions), with a mean observation time per session of approximately 2 hours and 45 minutes ( 23 minutes).

Questions

A total of 260 questions (ward 1: 136 questions; ward 2: 124 questions) met inclusion criteria and involved 101 different patients, with a median of 2 questions/patient (IQR: 1–3) and a range of 1 to 14 questions/patient. Overall, interns were asked 2.6 questions/hour (IQR: 1.4–4.7), with a range of 0 to 7 questions per hour; the great majority of questions (210 [82%]) were posed face to face. Types of questions recorded included medications 28% (73), diet/fluids 15% (39), laboratory or diagnostic/procedural related 22% (57), physical exam or other measurements 8.5% (22), and other general medical or patient care-plan questions 26.5% (69) (Table 1). Examples of recorded questions are provided in Table 2.

Table 1. Patient, Question, and Answer Characteristics

No. (%)

Patients, n=101
  Admitting services
    General pediatrics: 49 (48)
    Pediatric subspecialty: 27 (27)
    CCS*: 25 (25)
  Patients transferred from critical care unit
    Yes: 21 (21)
    No: 80 (79)
Questions, n=260
  Patients' length of stay at time of recorded question*
    ≤2 days: 142 (55)
    >2 days: 118 (45)
  Intern consecutive night shift (1–5)
    1st or 2nd night (early): 86 (33)
    3rd through 5th night (late): 174 (67)
  Intern week of service during a 4-week rotation
    Weeks 1–2 (early): 119 (46)
    Weeks 3–4 (late): 141 (54)
  Question sources
    Clinical provider: 167 (64)
    Parent/patient or other family member: 93 (36)
  Question categories
    Medications: 73 (28)
    Diet and/or fluids: 39 (15)
    Labs or diagnostic imaging/procedures: 57 (22)
    Physical exam/vital signs/measurements: 22 (8.5)
    Other general medical or patient care plan questions: 69 (26.5)
Answers, n=233
  Resources reported
    Written sign-out: 17 (7.3)
    Verbal sign-out (excluding any written sign-out use): 59 (25.3)
    Other resources: 157 (67.4)

NOTE: Abbreviations: CCS, complex care service. *Patients' inpatient length of stay means time (in days) between admission date and night of recorded question. Interns' week of service and consecutive night means time (in weeks or days, respectively) between interns' ward rotation start date and night of observation. Clinical provider means nursing staff, referring pediatrician, pharmacist, or other clinical provider. Other resources includes general medical/clinical knowledge, the electronic medical record, parents' report, other clinicians' report (ie, senior resident, nursing staff), Internet.
Table 2. Question Examples by Category

Medication questions (including medication allergy or drug level questions)
  Could you clarify the Lasix orders?
  Pharmacy rejected the medication, what do you want to do?
Dietary and fluid questions
  Do you want to continue NG feeds at 10 mL/hr and advance?
  Is she going to need to be NPO for the biopsy in the AM?
Laboratory or diagnostic tests/procedure questions
  Do you want blood cultures on this patient?
  What was the result of her x-ray?
Physical exam questions (including height/weight or vital sign measurements)
  What do you think of my back (site of biopsy)?
  Is my back okay, because it seems sore after the (renal) biopsy?
Other (patient related) general medical or care plan questions
  Did you talk with urology about their recommendations?
  Do you know the plan for tomorrow?

NOTE: Abbreviations: AM, morning; NG, nasogastric; NPO, nothing by mouth.

Across the 2 study wards, 48% (49) of patients involved in questions were admitted to a general pediatric service; 27% (27) were admitted to a pediatric specialty service (including the genetics/metabolism, endocrinology, adolescent medicine, pulmonary, or toxicology admitting services); the remaining 25% (25) were admitted to a complex care service (CCS), specifically designed for patients with multisystem genetic, neurological, or congenital disorders (Table 1).[16, 17] Approximately 21% (21) of patients had been transferred to the floor from a critical care unit (Table 1).

Answers

Of the 260 recorded questions, 90% (233) had documented answers. For the 10% (27) of questions with undocumented answers, 21 were observed to be verbally deferred by the intern to the day team or another care provider (ie, other physician or nurse), and almost half (42.9% [9]) involved general care‐plan questions; the remainder involved medication (4), diet (2), diagnostic testing (5), or vital sign (1) questions. An additional 6 questions went unanswered during the observation period, and it is unknown if or when they were answered.

Of the answered questions, 90% (209) of answers were provided by trainees within 5 minutes and 9% (21) within 1.5 hours. In all, interns reported using a single information resource to provide answers for 61% (142) of questions, 2 resources for 33% (76) of questions, and 3 resources for 6% (15) of questions.

Across both study wards, interns reported using information provided in written or verbal handoffs to answer 32.6% of questions. Interns reported using the written handoff, either alone or in combination with other information resources, to provide answers for 7.3% (17) of questions; verbal handoff, either alone or in combination with another resource (excluding written handoff), was reported as a resource for 25.3% (59) of questions. Of note, interns were directly observed to look at the written handoff when answering 21% (49) of questions.

A variety of other resources, including general medical/clinical knowledge, the EMR, and parents or other resources, were used to answer the remaining 67.4% (157) of questions. Intern general medical knowledge (ie, reports of simply knowing the answer to the question in their head[s]) was used to provide answers for 53.2% (124) of questions asked.

Unadjusted univariate regression analyses assessing predictors of written and verbal handoff use are shown in Figure 1. Multivariate logistic regression analyses showed that both dietary questions (odds ratio [OR]: 3.64, 95% confidence interval [CI]: 1.51–8.76; P=0.004) and interns' consecutive call night (OR: 0.29, 95% CI: 0.09–0.93; P=0.04) remained significant predictors of written handoff use. After adjusting for risk factors identified above, no differences in written handoff use were seen between the 2 wards.

Figure 1
Univariate predictors of written and verbal handoff use. Physical exam/measurement questions are not displayed in this graph as they were not associated with written or verbal handoff use. Abbreviations: CI, confidence interval; ICU, intensive care unit. *P < 0.05 = significant univariate predictor of written handoff use. **P < 0.05 = significant univariate predictor of verbal handoff use.

Multivariate logistic regression for predictors of verbal handoff use showed that questions regarding patients with longer lengths of stay (OR: 1.97, 95% CI: 1.02–3.8; P=0.04), those regarding general care plans (OR: 2.07, 95% CI: 1.13–3.78; P=0.02), as well as those asked by clinical staff (OR: 1.95, 95% CI: 1.04–3.66; P=0.04), remained significant predictors of reported verbal handoff use.

DISCUSSION

In light of the recent changes in duty hours implemented in July 2011, many pediatric training programs are having trainees work in day and night shifts.[18] Pediatric resident physicians frequently answer questions that pertain to patients handed off between day and night shifts. We found that on average, information provided in the verbal and written handoff was used almost once per hour. Housestaff in our study generally based their answers on information found in 1 or 2 resources, with almost one‐third of all questions involving some use of the written or verbal handoff. Prior research has documented widespread problems with resident handoff practices across programs and a high rate of medical errors due to miscommunications.[3, 4, 19, 20] Given how often information contained within the handoff was used as interns went about their nightly tasks, it is not difficult to understand how errors or omissions in the handoff process may potentially translate into frequent problems in direct patient care.

Trainees reported using written handoff tools to provide answers for 7.3% of questions. As we had suspected, they relied less frequently on their written handoffs as they completed more consecutive call nights. Interestingly, however, even when housestaff did not report using the written handoff, they were observed quite often to look at it before providing an answer. One explanation for this discrepancy between trainee reports and our observations is that the written handoff may serve as a memory tool, even if housestaff do not directly attribute their answers to its content. Our study also found that answers to questions concerning patients' diet and fluids were more likely to be ascribed to information contained in the written handoff. This finding supports the potential value of automated written handoff tools that are linked to the EMR, which can best ensure accuracy of this type of information.

Housestaff in our study also reported using information received during the verbal handoff to answer 1 out of every 4 on‐call questions. Although we did not specifically rate or monitor the quality of verbal handoffs, prior research has demonstrated that resident verbal handoff is often plagued with incomplete and inaccurate data.[3, 4, 19, 21] One investigation found that pediatric interns were prone to overestimating the effectiveness of their verbal handoffs, even as they failed to convey urgent information to their peers.[19] In light of such prior work, our finding that interns frequently rely on the verbal transfer of information supports specific residency training program handoff initiatives that target verbal exchanges.[11, 22, 23]

Although information obtained in the handoff was frequently required by on-call housestaff, our study found that two-thirds of all questions were answered using other resources, most often general medical or clinical knowledge. Clearly, background knowledge and experience are fundamental to trainees' ability to perform their jobs. Such reliance on general knowledge for problem solving may not be unique to interns. One recent observational study of senior pediatric cardiac subspecialists reported a high frequency of reliance on their own clinical experience, instinct, or prior training in making clinical decisions.[24] Further investigation may be useful to parse out the exact types of clinical knowledge being used, and may have important implications for how training programs plan for overnight supervision.[25, 26, 27]

Our study has several limitations. First, it was beyond the scope of this study to link housestaff answers to patient outcomes or medical errors. Given the frequency with which the handoff, a known source of vulnerability to medical error, was used by on-call housestaff, our study suggests that future research evaluating the relationship between questions asked of on-call housestaff, the answers provided, and downstream patient safety incidents may be merited. Second, our study was conducted in a single pediatric residency program, with a single physician observer, midway through the interns' first year of training, and only in the early evening hours. This limits the generalizability of our findings, as the use of handoffs to answer on-call questions may be different at other stages of the training process, within other specialties, or even at different times of the day. We also began our observations after the handoff had taken place; future studies may want to assess how variations in written and verbal handoff processes affect their use. As a final limitation, we note that although collecting information in real time using a direct observational method eliminated the problem of recall bias, there may have been attribution bias.

The results of our study demonstrate that on‐call pediatric housestaff are frequently asked a variety of clinical questions posed by hospital staff, patients, and their families. We found that trainees are apt to rely both on handoff information and other resources to provide answers. By better understanding what resources on‐call housestaff are accessing to answer questions overnight, we may be able to better target interventions needed to improve the availability of patient information, as well as the usefulness of written and verbal handoff tools.[11, 22, 23]

Acknowledgments

The authors thank Katharine Levinson, MD, and Melissa Atmadja, BA, for their help with the data review and guidance with database management. The authors also thank the housestaff from the Boston Combined Residency Program in Pediatrics for their participation in this study.

Disclosures: Maireade E. McSweeney, MD, as the responsible author certifies that all coauthors have seen and agree with the contents of this article, takes responsibility for the accuracy of these data, and certifies that this information is not under review by any other publication. All authors had no financial conflicts of interest or conflicts of interest relevant to this article to disclose. Dr. Landrigan is supported in part by the Children's Hospital Association for his work as an Executive Council member of the Pediatric Research in Inpatient Settings network. In addition, he has received honoraria from the Committee of Interns and Residents as well as multiple academic medical centers for lectures delivered on handoffs, sleep deprivation, and patient safety, and he has served as an expert witness in cases regarding patient safety and sleep deprivation.

References
  1. Improving America's hospitals: The Joint Commission's annual report on quality and safety. 2007. Available at: http://www.jointcommission.org/Improving_Americas_Hospitals_The_Joint_Commissions_Annual_Report_on_Quality_and_Safety_-_2007. Accessed October 3, 2011.
  2. US Department of Health and Human Services, Office of Inspector General. Adverse events in hospitals: methods for identifying events. 2010. Available at: http://oig.hhs.gov/oei/reports/oei-06-08-00221.pdf. Accessed October 3, 2011.
  3. Arora V, Johnson J, Lovinger D, Humphrey HJ, Meltzer DO. Communication failures in patient sign-out and suggestions for improvement: a critical incident analysis. Qual Saf Health Care. 2005;14:401–407.
  4. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. Consequences of inadequate sign-out for patient care. Arch Intern Med. 2008;168:1755–1760.
  5. Accreditation Council for Graduate Medical Education. Common program requirements. 2010. Available at: http://acgme-2010standards.org/pdf/Common_Program_Requirements_07012011.pdf. Accessed January 25, 2011.
  6. Volpp KG, Landrigan CP. Building physician work hour regulations from first principles and best evidence. JAMA. 2008;300:1197–1199.
  7. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign-out. J Hosp Med. 2006;1:257–266.
  8. Eaton EG, Horvath KD, Lober WB, Rossini AJ, Pellegrini CA. A randomized, controlled trial evaluating the impact of a computerized rounding and sign-out system on continuity of care and resident work hours. J Am Coll Surg. 2005;200:538–545.
  9. Wayne J TR, Reinhardt G, Rooney D, Makoul G, Chopra S, DaRosa D. Simple standardized patient handoff system that increases accuracy and completeness. J Surg Educ. 2008;65:476–485.
  10. Li P, Ali S, Tang C, Ghali WA, Stelfox HT. Review of computerized physician handoff tools for improving the quality of patient care [published online ahead of print November 20, 2012]. J Hosp Med. doi: 10.1002/jhm.1988.
  11. Sectish TC, Starmer AJ, Landrigan CP, Spector ND. Establishing a multisite education and research project requires leadership, expertise, collaboration, and an important aim. Pediatrics. 2010;126:619–622.
  12. Farnan JM, Paro JA, Rodriguez RM, et al. Hand-off education and evaluation: piloting the observed simulated hand-off experience (OSHE). J Gen Intern Med. 2009;25:129–134.
  13. Starmer AJ, Spector ND, Srivastava R, Allen AD, Landrigan CP, Sectish TC. I-PASS, a mnemonic to standardize verbal handoffs. Pediatrics. 2012;129:201–204.
  14. Chu ES, Reid M, Schulz T, et al. A structured handoff program for interns. Acad Med. 2009;84:347–352.
  15. Nabors C, Peterson SJ, Lee WN, et al. Experience with faculty supervision of an electronic resident sign-out system. Am J Med. 2010;123:376–381.
  16. Berry JG, Hall DE, Kuo DZ, et al. Hospital utilization and characteristics of patients experiencing recurrent readmissions within children's hospitals. JAMA. 2011;305:682–690.
  17. Simon TD, Berry J, Feudtner C, et al. Children with complex chronic conditions in inpatient hospital settings in the United States. Pediatrics. 2010;126:647–655.
  18. Chua KP, Gordon MB, Sectish T, Landrigan CP. Effects of a night-team system on resident sleep and work hours. Pediatrics. 2011;128:1142–1147.
  19. Chang VY AV, Lev-Ari S, D'Arcy M, Keysar B. Interns overestimate the effectiveness of their hand-off communication. Pediatrics. 2010;125:491–496.
  20. McSweeney ME, Lightdale JR, Vinci RJ, Moses J. Patient handoffs: pediatric resident experiences and lessons learned. Clin Pediatr (Phila). 2011;50:57–63.
  21. Borowitz SM, Waggoner-Fountain LA, Bass EJ, Sledd RM. Adequacy of information transferred at resident sign-out (in-hospital handover of care): a prospective survey. Qual Saf Health Care. 2008;17:6–10.
  22. Arora V, Johnson J. A model for building a standardized hand-off protocol. Jt Comm J Qual Patient Saf. 2006;32:646–655.
  23. Horwitz LI, Moin T, Green ML. Development and implementation of an oral sign-out skills curriculum. J Gen Intern Med. 2007;22:1470–1474.
  24. Darst JR, Newburger JW, Resch S, Rathod RH, Lock JE. Deciding without data. Congenit Heart Dis. 2010;5:339–342.
  25. Farnan JM, Petty LA, Georgitis E, et al. A systematic review: the effect of clinical supervision on patient and residency education outcomes. Acad Med. 2012;87:428–442.
  26. Haber LA, Lau CY, Sharpe BA, Arora VM, Farnan JM, Ranji SR. Effects of increased overnight supervision on resident education, decision-making, and autonomy. J Hosp Med. 2012;7:606–610.
  27. Farnan JM, Burger A, Boonayasai RT, et al. Survey of overnight academic hospitalist supervision of trainees. J Hosp Med. 2012;7:521–523.
Issue
Journal of Hospital Medicine - 8(6)
Page Number
328-333
Display Headline
Answering questions on call: Pediatric resident physicians' use of handoffs and other resources
Article Source
Copyright © 2013 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Maireade E. McSweeney, MD, Division of Gastroenterology and Nutrition, Boston Children's Hospital, Boston, MA 02115; Telephone: 617-355-7036; Fax: 617-730-0495; E-mail: maireade.mcsweeney@childrens.harvard.edu

Development of the PRIS Network

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Development of the pediatric research in inpatient settings (PRIS) network: Lessons learned

Since the term hospitalist was coined in 1996,1 the field of hospital medicine has grown exponentially. Hospitalists are caring for increasing numbers of adults, including Medicare beneficiaries, in hospitals across the United States.2 Pediatric hospital medicine has grown in parallel. By 1998, 50% of pediatric department chairs across the US and Canada had implemented hospitalist programs, with another 27% reporting they were soon to do so.3 A bit more than a decade later, pediatric hospitalists can be found in nearly every major academic medical center, and in a large proportion of community hospitals throughout the US and Canada.

In the past several years, major advances have begun to occur in the manner in which hospital medicine research is conducted. In this article, we describe how pediatric hospital medicine research has advanced over the past several years, culminating in the conduct of several large multicenter research projects through the Pediatric Research in Inpatient Settings (PRIS) Network. We believe that lessons learned in the development of PRIS could help foster the growth of other current and future networks of hospitalist researchers, and lay the groundwork for national improvement efforts.

HOSPITAL MEDICINE RESEARCH: GROWTH AND DEVELOPMENT

In 2001, a small group of thought leaders in pediatric hospital medicine (see Acknowledgements) conceived the notion of starting a hospitalist research network, which they named the Pediatric Research in Inpatient Settings (PRIS) Network.4 PRIS was modeled in part after a successful pediatric primary care network.5 Since hospitalists in institutions across the country were being tasked to improve the care of hospitalized patients, and to lead diverse quality and safety initiatives, why not create a network to facilitate identification of high priority problems and evidence‐based approaches to them, and coordinate improvement efforts? The ambitious goal of the fledgling network was to conduct transformative research into inpatient healthcare delivery and discover both condition‐dependent and condition‐independent processes of care that were linked to patient outcomes.

PRIS began as (and remains) an open research network; from the outset, any hospitalist could join. The notion of this network, even in its earliest stages, was sufficiently appealing to professional societies that the Society of Hospital Medicine (SHM), the Academic Pediatric Association (APA), and the American Academy of Pediatrics (AAP) agreed to cosponsor the network, fostering its early growth. The community of pediatric hospitalists was tremendously supportive as well; over 300 hospitalists initially signed up to participate. Initial studies were generated through surveys of members, through which variability in systemic organization and variation in the management of clinical conditions and systems-based issues across inpatient settings was identified and quantified.6–8

In the 2000s, as PRIS grew as a network, the research capacity of individuals within the field also grew. An increasing number of hospitalists began dedicating their academic careers to pursuing rigorous methodological training and conducting pediatric hospital medicine research. A series of studies began to emerge analyzing data from large administrative datasets that described variation in hospital care (but lacked clinical results and clinical outcomes outside of the hospital setting), such as the Pediatric Health Information Systems (PHIS) database operated by the Children's Hospital Association (formerly known as the Child Health Corporation of America).9–13 Pediatric hospital medicine fellowships began to appear,14 and over time, a cohort of hospitalist investigators with sufficient independence to mentor others arose.

THE REDESIGN OF PRIS

In 2009, a Pediatric Hospital Medicine Roundtable of 22 international leaders was convened under the guidance of SHM, APA, and AAP.15 This initiative, roughly a decade after the inception of the field, was critical to bringing pediatric hospitalist research and PRIS to the next level. It was recognized in that meeting that while PRIS had made a good start, it would not be possible to grow the network to the point of conducting top quality multicenter studies without the active involvement of a larger number of rigorously trained hospitalist researchers. To stimulate the network's growth, the existing PRIS Steering Committee, a diverse group of clinical, educational, administrative, and research leaders in the field, facilitated the transfer of leadership to a new Executive Council led entirely by trained researchers (see Table 1), with the support of the APA. The Executive Council subsequently developed a series of standard operating procedures (see Table 2) that have created a transparent process to deal with important, but often difficult, academic issues that networks face.

Table 1. Research Experience of the Individual Investigators
  • NOTE: Eight executive council members from 6 years of prior data. Abbreviations: NIH, National Institutes of Health.

Published papers, total number of papers: 150
Grants awarded, funding $3.7 million
Grants pending, funding $3.3 million
Research positions included director of research center, NIH study sections, national research committees, journal editorial experience
Mentors to junior faculty, fellows, and housestaff
However, no division chief or professor rank at the time of the executive council creation (this has since changed)
Table 2. Governance and Standard Operating Procedures for the PRIS Network
  • Abbreviations: PRIS, Pediatric Research in Inpatient Settings.

Mission
Vision
Values
Objectives (first 5 years)
Organizational structure (executive council, ex officio members, advisory group, staff and participant organizations/member hospitalist groups)
Authorship and publication
Institutional review board approval
Protocol selection and review
Network funding
Ancillary studies
Adverse event reporting
Site monitoring

DEVELOPMENT OF MULTICENTER RESEARCH PROJECTS

The redesign of PRIS did not alter its objective: to build the evidence base regarding the optimal inpatient management of children. Evidence on how best to care for many pediatric conditions remains lacking, largely due to the facts that: a) death, the most definitive and readily measured of outcomes, is rare in pediatric hospitals; b) many pediatric conditions are relatively uncommon in any single hospital; and c) few validated, well‐developed metrics of inpatient pediatric quality exist.

As PRIS sought to launch multicenter studies of inpatient care quality, it continued to receive strong support from the APA, SHM, and AAP, and gained the support of a new partner, the Children's Hospital Association, which comprises a large group of children's hospitals across Canada and the US. The membership of PRIS grew to involve over 600 pediatric hospitalists from more than 75 hospitals.4 With a core group of funded hospitalist investigators, and strong support from partner organizations, the network sought and received funding for 3 major studies that are currently underway. Release of the federal government's Affordable Care Act and Comparative Effectiveness Research portfolio stimulated much of this work, prompting the network to reach out to existing and new stakeholders and to compete successfully for several multicenter studies.

Prioritization Project

Through its Prioritization Project ($1.6 million over 3 years, Children's Hospital Association), PRIS is using data on over 3.5 million hospitalizations in the PHIS database to identify conditions that are prevalent and costly, and whose management varies widely across institutions.16 After identifying the top-ranked medical and surgical conditions for further study, the project is conducting drill downs in which the reasons for variation are being sought. By partnering with hospital and clinical leadership at these hospitals, and producing a data-driven approach to prioritization, PRIS aims to conduct collaborative research and improvement work across hospitals to understand and reduce unwarranted variation in resource utilization for several of these conditions, and to measure the impact of such efforts on patient and cost outcomes.
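
As a rough illustration of this kind of data-driven prioritization (not the published method, which is described in reference 16), the sketch below ranks conditions on prevalence, total cost, and cross-hospital cost variation using a hypothetical PHIS-style extract; the file name, column names, and equal-weight composite rank are all assumptions.

```python
import pandas as pd

# Hypothetical extract: one row per hospitalization with its condition
# grouping, hospital, and standardized cost.
hosp = pd.read_csv("phis_hospitalizations.csv")

by_condition = hosp.groupby("condition").agg(
    n_cases=("encounter_id", "count"),
    total_cost=("std_cost", "sum"),
)

# Cross-hospital variation: coefficient of variation of mean cost per case.
mean_cost = hosp.groupby(["condition", "hospital_id"])["std_cost"].mean()
cv = mean_cost.groupby(level="condition").std() / mean_cost.groupby(level="condition").mean()
by_condition["cost_cv_across_hospitals"] = cv

# Rank each dimension (higher value = more prevalent, costly, or variable)
# and average the ranks to surface candidate conditions for drill downs.
by_condition["priority_rank"] = by_condition.rank(ascending=False).mean(axis=1)
print(by_condition.sort_values("priority_rank").head(10))
```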

PHIS+

PHIS+ ($9 million over 3 years, Agency for Healthcare Research and Quality) is a project that is taking electronically stored laboratory, microbiology, and radiology data from 6 children's hospitals, with diverse electronic health record systems, to build a robust new database.17 The project also funds several comparative effectiveness projects (several of which address conditions that are high prevalence, high cost, or highly variable in resource utilization, as demonstrated in the Prioritization Project) that are being carried out using this new database. This PHIS+ database will serve as an ongoing resource for hospitalist and subspecialist investigators interested in evaluating and improving the care of hospitalized children across multiple medical centers at once.
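
A minimal sketch of the harmonization step such a federated database requires is shown below: each site exports results in its local schema, and a site-specific code map translates them to a shared vocabulary before pooling. The file names, column names, and mapping table are hypothetical and are not drawn from the PHIS+ project itself.

```python
import pandas as pd

# Hypothetical per-site lab extracts and a mapping from local result codes
# to a shared standard vocabulary.
SITE_FILES = {"site_a": "site_a_labs.csv", "site_b": "site_b_labs.csv"}
code_map = pd.read_csv("local_to_standard_codes.csv")  # site, local_code, standard_code

frames = []
for site, path in SITE_FILES.items():
    labs = pd.read_csv(path)
    labs["site"] = site
    # Keep only this site's code mappings, then translate local codes.
    site_map = code_map.loc[code_map["site"] == site, ["local_code", "standard_code"]]
    labs = labs.merge(site_map, on="local_code", how="left")
    frames.append(labs[["site", "patient_id", "standard_code", "value", "units", "collected_at"]])

# Pool the harmonized records into one federated table.
pd.concat(frames, ignore_index=True).to_csv("federated_labs.csv", index=False)
```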

I‐PASS

Innovation in Pediatric Education (IIPE)‐PRIS Accelerating Safe Sign‐outs (I‐PASS) ($3 million over 3 years, Department of Health and Human Services) is a research and improvement project that is evaluating the effects on patient safety, resident experience, and diverse care processes of implementing a bundle of interventions designed to improve handoffs at change of shift.18, 19 It is one of the first multicenter educational improvement projects of its kind. Given the commonalities between change‐of‐shift handoffs in pediatrics and other fields, and the commonalities between different types of handoffs in the inpatient and outpatient setting, I‐PASS may yield communication and improvement lessons that extend beyond the confines of the study population itself.

The strategic focus of these 3 grants was to develop studies that are relevant for both the membership of practicing hospitalists and appealing to the stakeholders of the network. PRIS intends that these 3 projects will be but the first few in a long series of studies led by investigators nationwide who are interested in better understanding, and advancing the care of hospitalized children.

RELEVANCE TO OTHER NETWORKS

We believe that the story of PRIS' development, current studies, and future plans has relevance to other adult, as well as pediatric, hospital medicine networks (see Table 3). As in pediatrics, a growing group of midcareer adult hospital medicine investigators has emerged, with proven track records in attracting federal funding and conducting research germane to our field. Some have previously worked together on large-scale multisite studies.20–23 A core group has come together to form the HOspital MEdicine Reengineering Network (HOMERUN).24 HOMERUN has recently secured funding from the Association of American Medical Colleges (AAMC) for a project that is linking clinical data from several hospitals to a centralized database, a project analogous to PHIS+, which will allow for Comparative Effectiveness Research studies with more accurate case ascertainment (by using clinical data to build cohorts) and additional statistical power from a larger number of cases. Defining which clinical questions to address first will help establish this new entity as a leader in hospital medicine research. Attracting stakeholder involvement will help make these endeavors successful. In recent months, PRIS and HOMERUN jointly collaborated on the submission of a large Centers for Medicare and Medicaid Innovation (CMMI) proposal to extend the work of I-PASS to include several internal medicine and additional pediatric resident and hospitalist care settings. Future collaborations between networks may help foster more rapid advances in care.

Table 3. Key Lessons Learned
Governance involves hospitalist investigators
In-person governance meetings to ensure/gauge buy-in
Stable infrastructure critical for success
Mentoring important for succession
Grants to fund large-scale projects demonstrate track record for network
Membership: What do members want/need?

Another pediatric hospitalist network has also emerged in the past few years, with a focus on quality improvement across inpatient pediatric settings, the Value in Pediatrics (VIP) Network.25 Although still early in its development, VIP has already successfully engaged in national quality improvement work regarding benchmarking care provided for children with bronchiolitis, reducing patient identification (ID) band errors, and improving discharge communications. VIP recently became part of the AAP's Quality Improvement Innovation Network (QuINN) group through which it is receiving infrastructure support.

As they develop, hospital medicine research and improvement networks will seek to systematically design and rigorously execute multicenter projects that provide answers to the clinical questions practicing hospitalists face on a daily basis. As they do so, mentoring of both junior and novice investigators will be necessary for the longevity of networks. To foster junior investigators, PRIS has undertaken a series of workshops presented at various national conferences, in addition to working with junior investigators directly on its currently funded studies.

CONCLUSION

Hospitalists' engagement in research and quality improvement networks builds upon their already successful engagement in clinical care, education, and quality improvement at a local level. A research and improvement mission that is tightly coupled with the day‐to‐day needs of these other important hospitalist activities creates a synergy with the potential to lead to transformative advances in patient care. If hospitalists can discover how best to deliver care, train the next generation of providers, and work to implement needed improvements at a local and national level, they will have an unprecedented opportunity to improve the care and health of children and adults.

Acknowledgements

The authors acknowledge the PRIS Network. They offer profound thanks to the members of the PRIS Steering Committee who founded the network and served throughout its initial 8 years (2001–2009), without whom the network would never have been launched: Mary Ottolini, Jack Percelay, Dan Rauch, Erin Stucky, and David Zipes (in addition to C.P.L.); and the current PRIS Executive Council who are leading the network: Patrick Conway, Ron Keren, Sanjay Mahant, Samir Shah, Tamara Simon, Joel Tieder, and Karen Wilson (in addition to C.P.L. and R.S.).

Note Added in Proof

Disclosures: I‐PASS is funded by grant 1R18AE00002901, from the Department of Health and Human Resources (DHHR). PHIS+ is funded by grant 1R01HSO986201, from the Agency for Healthcare Research and Quality (AHRQ). The Prioritization Project is funded by a grant from the Children's Hospital Association (CHA). The PRIS Network has received support from CHA, APA, AAP, and SHM. C.P.L. and R.S. are both Executive Council members of the PRIS Network and receive support from CHA.

References
  1. Wachter RM, Goldman L. The emerging role of "hospitalists" in the American health care system. N Engl J Med. 1996;335(7):514–517.
  2. Kuo YF, Sharma G, Freeman JL, Goodwin JS. Growth in the care of older patients by hospitalists in the United States. N Engl J Med. 2009;360(11):1102–1112.
  3. Srivastava R, Landrigan C, Gidwani P, Harary OH, Muret-Wagstaff S, Homer CJ. Pediatric hospitalists in Canada and the United States: a survey of pediatric academic department chairs. Ambul Pediatr. 2001;1(6):338–339.
  4. Pediatric Research in Inpatient Settings. Available at: http://www.prisnetwork.org. Accessed June 21, 2012.
  5. Wasserman RC, Slora EJ, Bocian AB, et al. Pediatric research in office settings (PROS): a national practice-based research network to improve children's health care. Pediatrics. 1998;102(6):1350–1357.
  6. Landrigan CP, Conway PH, Stucky ER, Chiang VW, Ottolini MC. Variation in pediatric hospitalists' use of proven and unproven therapies: a study from the Pediatric Research in Inpatient Settings (PRIS) network. J Hosp Med. 2008;3(4):292–298.
  7. Conway PH, Edwards S, Stucky ER, Chiang VW, Ottolini MC, Landrigan CP. Variations in management of common inpatient pediatric illnesses: hospitalists and community pediatricians. Pediatrics. 2006;118(2):441–447.
  8. Mittal VS, Sigrest T, Ottolini MC, et al. Family-centered rounds on pediatric wards: a PRIS network survey of US and Canadian hospitalists. Pediatrics. 2010;126(1):37–43.
  9. Shah SS, DiCristina CM, Bell LM, Ten Have T, Metlay JP. Primary early thoracoscopy and reduction in length of hospital stay and additional procedures among children with complicated pneumonia: results of a multicenter retrospective cohort study. Arch Pediatr Adolesc Med. 2008;162(7):675–681.
  10. Simon TD, Hall M, Riva-Cambrin J, et al. Infection rates following initial cerebrospinal fluid shunt placement across pediatric hospitals in the United States. Clinical article. J Neurosurg Pediatr. 2009;4(2):156–165.
  11. Srivastava R, Berry JG, Hall M, et al. Reflux related hospital admissions after fundoplication in children with neurological impairment: retrospective cohort study. BMJ. 2009;339:b4411.
  12. Tieder JS, Robertson A, Garrison MM. Pediatric hospital adherence to the standard of care for acute gastroenteritis. Pediatrics. 2009;124(6):e1081–1087.
  13. Zaoutis T, Localio AR, Leckerman K, Saddlemire S, Bertoch D, Keren R. Prolonged intravenous therapy versus early transition to oral antimicrobial therapy for acute osteomyelitis in children. Pediatrics. 2009;123(2):636–642.
  14. Freed GL, Dunham KM. Characteristics of pediatric hospital medicine fellowships and training programs. J Hosp Med. 2009;4(3):157–163.
  15. Rauch DA, Lye PS, Carlson D, et al. Pediatric hospital medicine: a strategic planning roundtable to chart the future. J Hosp Med. 2012;7(4):329–334.
  16. Keren R, Luan X, Localio AR, et al. A novel method for prioritizing comparative effectiveness research topics. Arch Pediatr Adolesc Med. In press.
  17. Narus S, Srivastava R, Gouripeddi R, et al. Federating clinical data from six pediatric hospitals: process and initial results from the PHIS+ Consortium. In: Improving Health: Informatics and IT Changing the World. Proceedings of the AMIA 2011 Annual Symposium, Washington, DC, October 22–26, 2011:994–1003. Epub 2011 October 22.
  18. Sectish TC, Starmer AJ, Landrigan CP, Spector ND. Establishing a multisite education and research project requires leadership, expertise, collaboration, and an important aim. Pediatrics. 2010;126(4):619–622.
  19. Starmer AJ, Spector ND, Srivastava R, Allen AD, Landrigan CP, Sectish TC. I-PASS, a mnemonic to standardize verbal handoffs. Pediatrics. 2012;129(2):201–204.
  20. Auerbach AD, Katz R, Pantilat SZ, et al. Factors associated with discussion of care plans and code status at the time of hospital admission: results from the Multicenter Hospitalist Study. J Hosp Med. 2008;3(6):437–445.
  21. Go JT, Vaughan-Sarrazin M, Auerbach A, et al. Do hospitalists affect clinical outcomes and efficiency for patients with acute upper gastrointestinal hemorrhage (UGIH)? J Hosp Med. 2010;5(3):133–139.
  22. Hasan O, Meltzer DO, Shaykevich SA, et al. Hospital readmission in general medicine patients: a prediction model. J Gen Intern Med. 2010;25(3):211–219.
  23. Anderson WG, Pantilat SZ, Meltzer D, et al. Code status discussions at hospital admission are not associated with patient and surrogate satisfaction with hospital care: results from the Multicenter Hospitalist Study. Am J Hosp Palliat Care. 2011;28(2):102–108.
  24. HOMERUN. i2b2 Wiki, HOMERUN page. Available at: https://community.i2b2.org/wiki/display/HOMERUN/HOMERUN+Home. Accessed March 9, 2011.
  25. Value in Pediatrics Network Homepage. Available at: http://www.phm-vipnetwork.com. Accessed June 21, 2012.
Issue
Journal of Hospital Medicine - 7(8)
Page Number
661-664

Since the term hospitalist was coined in 1996,1 the field of hospital medicine has grown exponentially. Hospitalists are caring for increasing numbers of adultsincluding Medicare beneficiaries in hospitals across the United States.2 Pediatric hospital medicine has grown in parallel. By 1998, 50% of pediatric department chairs across the US and Canada had implemented hospitalist programs, with another 27% reporting they were soon to do so.3 A bit more than a decade later, pediatric hospitalists can be found in nearly every major academic medical center, and in a large proportion of community hospitals throughout the US and Canada.

In the past several years, major advances have begun to occur in the manner in which hospital medicine research is conducted. In this article, we will describe the manner in which pediatric hospital medicine research has advanced over the past several years, culminating in the conduct of several large multicenter research projects through the Pediatric Research in Inpatient Settings (PRIS) Network. We believe that lessons learned in the development of PRIS could help foster the growth of other current and future networks of hospitalist researchers, and lay the groundwork for national improvement efforts.

HOSPITAL MEDICINE RESEARCH: GROWTH AND DEVELOPMENT

In 2001, a small group of thought leaders in pediatric hospital medicine (see Acknowledgements) conceived the notion of starting a hospitalist research network, which they named the Pediatric Research in Inpatient Settings (PRIS) Network.4 PRIS was modeled in part after a successful pediatric primary care network.5 Since hospitalists in institutions across the country were being tasked to improve the care of hospitalized patients, and to lead diverse quality and safety initiatives, why not create a network to facilitate identification of high priority problems and evidence‐based approaches to them, and coordinate improvement efforts? The ambitious goal of the fledgling network was to conduct transformative research into inpatient healthcare delivery and discover both condition‐dependent and condition‐independent processes of care that were linked to patient outcomes.

PRIS began as (and remains) an open research networkfrom the outset, any hospitalist could join. The notion of this network, even in its earliest stages, was sufficiently appealing to professional societies that the Society of Hospital Medicine (SHM), the Academic Pediatric Association (APA), and the American Academy of Pediatrics (AAP) agreed to cosponsor the network, fostering its early growth. The community of pediatric hospitalists was tremendously supportive as well; over 300 hospitalists initially signed up to participate. Initial studies were generated through surveys of members, through which variability in systemic organization and variation in the management of clinical conditions and systems‐based issues across inpatient settings was identified and quantified.68

In the 2000s, as PRIS grew as a network, the research capacity of individuals within the field also grew. An increasing number of hospitalists began dedicating their academic careers to pursuing rigorous methodological training and conducting pediatric hospital medicine research. A series of studies began to emerge analyzing data from large administrative datasets that described the variation in hospital care (but lack clinical results and clinical outcomes outside of the hospital setting), such as the Pediatric Health Information Systems (PHIS) database operated by the Children's Hospital Association (formerly known as the Child Health Corporation of America).913 Pediatric hospital medicine fellowships began to appear,14 and over time, a cohort of hospitalist investigators with sufficient independence to mentor others arose.

THE REDESIGN OF PRIS

In 2009, a Pediatric Hospital Medicine Roundtable of 22 international leaders was convened under the guidance of SHM, APA, and AAP.15 This initiative, roughly a decade after the inception of the field, was critical to bringing pediatric hospitalist research and PRIS to the next level. It was recognized in that meeting that while PRIS had made a good start, it would not be possible to grow the network to the point of conducting top quality multicenter studies without the active involvement of a larger number of rigorously trained hospitalist researchers. To stimulate the network's growth, the existing PRIS Steering Committeea diverse group of clinical, educational, administrative, and research leaders in the fieldfacilitated the transfer of leadership to a new Executive Council led entirely by trained researchers (see Table 1), with the support of the APA. The Executive Council subsequently developed a series of standard operating procedures (see Table 2) that have created a transparent process to deal with important, but often difficult, academic issues that networks face.

Research Experience of the Individual Investigators
  • NOTE: Eight executive council members from 6 years of prior data. Abbreviations: NIH, National Institutes of Health.

Published papers, total number of papers: 150
Grants awarded, funding $3.7 million
Grants pending, funding $3.3 million
Research positions included director of research center, NIH study sections, national research committees, journal editorial experience
Mentors to junior faculty, fellows, and housestaff
However, no division chief or professor rank at the time of the executive council creation (this has since changed)
Governance and Standard Operating Procedures for the PRIS Network
  • Abbreviations: PRIS, Pediatric Research in Inpatient Settings.

Mission
Vision
Values
Objectives (first 5 years)
Organizational structure (executive council, ex officio members, advisory group, staff and participant organizations/member hospitalist groups)
Authorship and publication
Institutional review board approval
Protocol selection and review
Network funding
Ancillary studies
Adverse event reporting
Site monitoring

DEVELOPMENT OF MULTICENTER RESEARCH PROJECTS

The redesign of PRIS did not alter its objective: to build the evidence base regarding the optimal inpatient management of children. Evidence on how best to care for many pediatric conditions remains lacking, largely due to the facts that: a) death, the most definitive and readily measured of outcomes, is rare in pediatric hospitals; b) many pediatric conditions are relatively uncommon in any single hospital; and c) few validated, well‐developed metrics of inpatient pediatric quality exist.

As PRIS sought to launch multicenter studies of inpatient care quality, it continued to receive strong support from the APA, SHM, and AAP, and gained the support of a new partner, the Children's Hospital Association, which is comprised of a large group of children's hospitals across Canada and the US. The membership of PRIS grew to involve over 600 pediatric hospitalists from more than 75 hospitals.4 With a core group of funded hospitalist investigators, and strong support from partner organizations, the network sought and received funding for 3 major studies that are currently underway. Release of the federal government's Affordable Care Act and Comparative Effectiveness Research portfolio stimulated much of this work, stimulating the network to reach out to existing and new stakeholders and successfully compete for several multicenter studies.

Prioritization Project

Through its Prioritization Project ($1.6 million over 3 years, Children's Hospital Association), PRIS is using data on over 3.5 million hospitalizations in the PHIS database to identify conditions that are prevalent and costly, and whose management varies highly across institutions.16 After identifying the top ranked medical and surgical conditions for further study, the project is conducting drill downs in which the reasons for variation are being sought. By partnering with hospital and clinical leadership at these hospitals, and producing a data‐driven approach to prioritization, PRIS aims to conduct collaborative research and improvement work across hospitals that aim to understand and reduce the unwarranted variation in resource utilization for several of these conditions, and measure the impact of such efforts on patient and cost outcomes.

PHIS+

PHIS+ ($9 million over 3 years, Agency for Healthcare Research and Quality) is a project that is taking electronically stored laboratory, microbiology, and radiology data from 6 children's hospitals, with diverse electronic health record systems, to build a robust new database.17 The project also funds several comparative effectiveness projects (several of which are either high prevalence, high cost, or exhibit high variation in resource utilization, as demonstrated in the Prioritization Project) that are being carried out using this new database. This PHIS+ database will serve as an ongoing resource for hospitalist and subspecialist investigators interested in evaluating and improving the care of hospitalized children across multiple medical centers at once.

I‐PASS


Since the term hospitalist was coined in 1996,1 the field of hospital medicine has grown exponentially. Hospitalists are caring for increasing numbers of adults, including Medicare beneficiaries, in hospitals across the United States.2 Pediatric hospital medicine has grown in parallel. By 1998, 50% of pediatric department chairs across the US and Canada had implemented hospitalist programs, with another 27% reporting they were soon to do so.3 Little more than a decade later, pediatric hospitalists can be found in nearly every major academic medical center and in a large proportion of community hospitals throughout the US and Canada.

In the past several years, major advances have occurred in the way hospital medicine research is conducted. In this article, we describe how pediatric hospital medicine research has advanced over this period, culminating in several large multicenter research projects conducted through the Pediatric Research in Inpatient Settings (PRIS) Network. We believe that lessons learned in the development of PRIS can help foster the growth of other current and future networks of hospitalist researchers and lay the groundwork for national improvement efforts.

HOSPITAL MEDICINE RESEARCH: GROWTH AND DEVELOPMENT

In 2001, a small group of thought leaders in pediatric hospital medicine (see Acknowledgements) conceived the notion of starting a hospitalist research network, which they named the Pediatric Research in Inpatient Settings (PRIS) Network.4 PRIS was modeled in part after a successful pediatric primary care network.5 Since hospitalists in institutions across the country were being tasked with improving the care of hospitalized patients and leading diverse quality and safety initiatives, why not create a network to facilitate identification of high-priority problems and evidence-based approaches to them, and to coordinate improvement efforts? The ambitious goal of the fledgling network was to conduct transformative research into inpatient healthcare delivery and to discover both condition-dependent and condition-independent processes of care linked to patient outcomes.

PRIS began as (and remains) an open research network: from the outset, any hospitalist could join. The notion of this network, even in its earliest stages, was sufficiently appealing to professional societies that the Society of Hospital Medicine (SHM), the Academic Pediatric Association (APA), and the American Academy of Pediatrics (AAP) agreed to cosponsor the network, fostering its early growth. The community of pediatric hospitalists was tremendously supportive as well; over 300 hospitalists initially signed up to participate. Initial studies were generated through surveys of members, through which variability in systems organization and variation in the management of clinical conditions and systems-based issues across inpatient settings were identified and quantified.6-8

In the 2000s, as PRIS grew as a network, the research capacity of individuals within the field also grew. An increasing number of hospitalists began dedicating their academic careers to pursuing rigorous methodological training and conducting pediatric hospital medicine research. A series of studies began to emerge analyzing data from large administrative datasets that describe variation in hospital care (but lack clinical results and outcomes outside of the hospital setting), such as the Pediatric Health Information Systems (PHIS) database operated by the Children's Hospital Association (formerly known as the Child Health Corporation of America).9-13 Pediatric hospital medicine fellowships began to appear,14 and over time, a cohort of hospitalist investigators with sufficient independence to mentor others arose.

THE REDESIGN OF PRIS

In 2009, a Pediatric Hospital Medicine Roundtable of 22 international leaders was convened under the guidance of SHM, APA, and AAP.15 This initiative, roughly a decade after the inception of the field, was critical to bringing pediatric hospitalist research and PRIS to the next level. It was recognized in that meeting that while PRIS had made a good start, it would not be possible to grow the network to the point of conducting top-quality multicenter studies without the active involvement of a larger number of rigorously trained hospitalist researchers. To stimulate the network's growth, the existing PRIS Steering Committee, a diverse group of clinical, educational, administrative, and research leaders in the field, facilitated the transfer of leadership to a new Executive Council led entirely by trained researchers (see Table 1), with the support of the APA. The Executive Council subsequently developed a series of standard operating procedures (see Table 2) that have created a transparent process to deal with important, but often difficult, academic issues that networks face.

Research Experience of the Individual Investigators
  • NOTE: Eight executive council members from 6 years of prior data. Abbreviations: NIH, National Institutes of Health.

Published papers: 150 total
Grants awarded: $3.7 million in funding
Grants pending: $3.3 million in funding
Research positions included director of research center, NIH study sections, national research committees, journal editorial experience
Mentors to junior faculty, fellows, and housestaff
However, no division chief or professor rank at the time of the executive council creation (this has since changed)
Governance and Standard Operating Procedures for the PRIS Network
  • Abbreviations: PRIS, Pediatric Research in Inpatient Settings.

Mission
Vision
Values
Objectives (first 5 years)
Organizational structure (executive council, ex officio members, advisory group, staff and participant organizations/member hospitalist groups)
Authorship and publication
Institutional review board approval
Protocol selection and review
Network funding
Ancillary studies
Adverse event reporting
Site monitoring

DEVELOPMENT OF MULTICENTER RESEARCH PROJECTS

The redesign of PRIS did not alter its objective: to build the evidence base regarding the optimal inpatient management of children. Evidence on how best to care for many pediatric conditions remains lacking, largely because: a) death, the most definitive and readily measured of outcomes, is rare in pediatric hospitals; b) many pediatric conditions are relatively uncommon in any single hospital; and c) few validated, well-developed metrics of inpatient pediatric quality exist.

As PRIS sought to launch multicenter studies of inpatient care quality, it continued to receive strong support from the APA, SHM, and AAP, and gained the support of a new partner, the Children's Hospital Association, which comprises a large group of children's hospitals across Canada and the US. The membership of PRIS grew to involve over 600 pediatric hospitalists from more than 75 hospitals.4 With a core group of funded hospitalist investigators and strong support from partner organizations, the network sought and received funding for 3 major studies that are currently underway. The federal government's Affordable Care Act and Comparative Effectiveness Research portfolio stimulated much of this work, prompting the network to reach out to existing and new stakeholders and to successfully compete for several multicenter studies.

Prioritization Project

Through its Prioritization Project ($1.6 million over 3 years, Children's Hospital Association), PRIS is using data on over 3.5 million hospitalizations in the PHIS database to identify conditions that are prevalent and costly, and whose management varies widely across institutions.16 After identifying the top-ranked medical and surgical conditions for further study, the project is conducting drill-downs in which the reasons for variation are being sought. By partnering with hospital and clinical leadership at these hospitals, and by producing a data-driven approach to prioritization, PRIS aims to conduct collaborative research and improvement work across hospitals to understand and reduce unwarranted variation in resource utilization for several of these conditions, and to measure the impact of such efforts on patient and cost outcomes.
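
For readers who want a concrete sense of how a prevalence-cost-variation ranking of this kind can be assembled from an administrative extract, a minimal sketch follows. The file name, column names, and composite ranking rule are illustrative assumptions, not the published PRIS method, which is described in reference 16.

```python
import pandas as pd

# Hypothetical admissions extract; columns are illustrative, not the real PHIS schema.
# One row per hospitalization: hospital_id, condition (diagnosis grouping), total_cost.
admissions = pd.read_csv("phis_admissions.csv")

# Overall burden per condition: number of admissions and total cost.
burden = (admissions.groupby("condition")["total_cost"]
          .agg(total_cases="count", total_cost="sum"))

# Cross-hospital variation: coefficient of variation of mean cost per case across hospitals.
mean_cost = (admissions.groupby(["condition", "hospital_id"])["total_cost"]
             .mean().rename("mean_cost").reset_index())
variation = (mean_cost.groupby("condition")["mean_cost"]
             .agg(cv=lambda s: s.std(ddof=1) / s.mean()))

summary = burden.join(variation)

# One simple (assumed) prioritization rule: sum the ranks on each axis so that
# prevalent, costly, highly variable conditions float to the top of the list.
ranks = summary.rank(ascending=False)
summary["priority_score"] = ranks.sum(axis=1)
print(summary.sort_values("priority_score").head(10))
```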

PHIS+

PHIS+ ($9 million over 3 years, Agency for Healthcare Research and Quality) is combining electronically stored laboratory, microbiology, and radiology data from 6 children's hospitals with diverse electronic health record systems into a robust new database.17 The project also funds several comparative effectiveness projects (several of which address conditions that are high prevalence, high cost, or highly variable in resource utilization, as demonstrated in the Prioritization Project) that are being carried out using this new database. The PHIS+ database will serve as an ongoing resource for hospitalist and subspecialist investigators interested in evaluating and improving the care of hospitalized children across multiple medical centers at once.
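
A toy illustration of the harmonization step such a federation requires, in which each hospital's locally coded laboratory results are mapped to a shared vocabulary (eg, LOINC) before pooling. The file names, local codes, and crosswalk below are invented for illustration; the actual PHIS+ federation process is described in reference 17.

```python
import pandas as pd

# Invented example: each site exports lab results with its own local test codes.
site_files = {"site_a": "site_a_labs.csv", "site_b": "site_b_labs.csv"}

# Illustrative crosswalk from local codes to a shared standard;
# in practice this mapping is built and validated per site.
crosswalk = pd.DataFrame({
    "site":       ["site_a", "site_a", "site_b"],
    "local_code": ["NA",     "K",      "SODIUM"],
    "loinc":      ["2951-2", "2823-3", "2951-2"],
})

frames = []
for site, path in site_files.items():
    labs = pd.read_csv(path)  # expects: patient_id, local_code, value, units, collected_at
    labs["site"] = site
    frames.append(labs)

# Pool the sites and attach the standardized code; rows that fail to map are
# flagged for manual review rather than silently dropped.
pooled = pd.concat(frames, ignore_index=True).merge(
    crosswalk, on=["site", "local_code"], how="left")
unmapped = pooled[pooled["loinc"].isna()]
print(f"{len(unmapped)} results still need mapping review")
```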

I‐PASS

Innovation in Pediatric Education (IIPE)-PRIS Accelerating Safe Sign-outs (I-PASS) ($3 million over 3 years, Department of Health and Human Services) is a research and improvement project evaluating the effects of a bundle of interventions designed to improve handoffs at change of shift on patient safety, resident experience, and diverse care processes.18, 19 It is one of the first multicenter educational improvement projects of its kind. Given the commonalities between change-of-shift handoffs in pediatrics and other fields, and between different types of handoffs in inpatient and outpatient settings, I-PASS may yield communication and improvement lessons that extend well beyond the study population itself.
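
For readers unfamiliar with the mnemonic, the sketch below organizes a single handoff record around the published I-PASS elements (Illness severity, Patient summary, Action list, Situation awareness and contingency planning, Synthesis by receiver).19 The field names and example content are ours and are not part of the study's intervention bundle.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class IPassHandoff:
    """One patient's change-of-shift handoff, keyed to the I-PASS elements."""
    illness_severity: str            # e.g., "stable", "watcher", or "unstable"
    patient_summary: str             # events leading to admission, hospital course, plan
    action_list: List[str]           # to-dos with owners and timelines
    situation_awareness: List[str]   # contingency plans: "if X happens, then Y"
    synthesis_by_receiver: str = ""  # receiver restates key points to confirm understanding

handoff = IPassHandoff(
    illness_severity="watcher",
    patient_summary="3-year-old with asthma exacerbation, improving on q4h albuterol.",
    action_list=["Wean albuterol to q6h if exam improves", "Follow up evening chest film read"],
    situation_awareness=["If work of breathing worsens, return to continuous albuterol and call the fellow"],
)
```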

The strategic focus of these 3 grants was to develop studies that are both relevant to practicing hospitalists and appealing to the network's stakeholders. PRIS intends these 3 projects to be but the first in a long series of studies led by investigators nationwide who are interested in better understanding, and advancing, the care of hospitalized children.

RELEVANCE TO OTHER NETWORKS

We believe that the story of PRIS' development, current studies, and future plans has relevance to other adult, as well as pediatric, hospital medicine networks (see Table 3). As in pediatrics, a growing group of midcareer adult hospital medicine investigators has emerged, with proven track records in attracting federal funding and conducting research germane to our field. Some have previously worked together on large-scale multisite studies.20-23 A core group has come together to form the HOspital MEdicine Reengineering Network (HOMERUN).24 HOMERUN has recently secured funding from the Association of American Medical Colleges (AAMC) for a project, analogous to PHIS+, that is linking clinical data from several hospitals to a centralized database; this will allow Comparative Effectiveness Research studies with more accurate case ascertainment (by using clinical data to build cohorts) and additional statistical power (by securing a larger number of cases). Defining which clinical questions to address first will help establish this new entity as a leader in hospital medicine research, and attracting stakeholder involvement will help make these endeavors successful. In recent months, PRIS and HOMERUN jointly collaborated on the submission of a large Centers for Medicare and Medicaid Innovation (CMMI) proposal to extend the work of I-PASS to several internal medicine and additional pediatric resident and hospitalist care settings. Future collaborations between networks may help foster more rapid advances in care.

Key Lessons Learned
Governance involves hospitalist investigators
In‐person governance meetings to ensure/gauge buy‐in
Stable infrastructure critical for success
Mentoring important for succession
Grants to fund large‐scale projects demonstrate track record for network
Membership: What do members want/need?

Another pediatric hospitalist network, the Value in Pediatrics (VIP) Network, has also emerged in the past few years, with a focus on quality improvement across inpatient pediatric settings.25 Although still early in its development, VIP has already engaged successfully in national quality improvement work on benchmarking care for children with bronchiolitis, reducing patient identification (ID) band errors, and improving discharge communications. VIP recently became part of the AAP's Quality Improvement Innovation Network (QuINN), through which it is receiving infrastructure support.

As they develop, hospital medicine research and improvement networks will seek to systematically design and rigorously execute multicenter projects that answer the clinical questions practicing hospitalists face on a daily basis. As they do so, mentoring of junior and novice investigators will be necessary for the longevity of the networks. To foster junior investigators, PRIS has undertaken a series of workshops presented at national conferences, in addition to working with junior investigators directly on its currently funded studies.

CONCLUSION

Hospitalists' engagement in research and quality improvement networks builds upon their already successful engagement in clinical care, education, and quality improvement at a local level. A research and improvement mission that is tightly coupled with the day‐to‐day needs of these other important hospitalist activities creates a synergy with the potential to lead to transformative advances in patient care. If hospitalists can discover how best to deliver care, train the next generation of providers, and work to implement needed improvements at a local and national level, they will have an unprecedented opportunity to improve the care and health of children and adults.

Acknowledgements

The authors acknowledge the PRIS Network. They offer profound thanks to the members of the PRIS Steering Committee who founded the network and served throughout its initial 8 years (2001-2009), without whom the network would never have been launched: Mary Ottolini, Jack Percelay, Dan Rauch, Erin Stucky, and David Zipes (in addition to C.P.L.); and the current PRIS Executive Council who are leading the network: Patrick Conway, Ron Keren, Sanjay Mahant, Samir Shah, Tamara Simon, Joel Tieder, and Karen Wilson (in addition to C.P.L. and R.S.).

Note Added in Proof

Disclosures: I-PASS is funded by grant 1R18AE00002901 from the Department of Health and Human Services (DHHS). PHIS+ is funded by grant 1R01HSO986201 from the Agency for Healthcare Research and Quality (AHRQ). The Prioritization Project is funded by a grant from the Children's Hospital Association (CHA). The PRIS Network has received support from CHA, APA, AAP, and SHM. C.P.L. and R.S. are both Executive Council members of the PRIS Network and receive support from CHA.

References
1. Wachter RM, Goldman L. The emerging role of "hospitalists" in the American health care system. N Engl J Med. 1996;335(7):514-517.
2. Kuo YF, Sharma G, Freeman JL, Goodwin JS. Growth in the care of older patients by hospitalists in the United States. N Engl J Med. 2009;360(11):1102-1112.
3. Srivastava R, Landrigan C, Gidwani P, Harary OH, Muret-Wagstaff S, Homer CJ. Pediatric hospitalists in Canada and the United States: a survey of pediatric academic department chairs. Ambul Pediatr. 2001;1(6):338-339.
4. Pediatric Research in Inpatient Settings. Available at: http://www.prisnetwork.org. Accessed June 21, 2012.
5. Wasserman RC, Slora EJ, Bocian AB, et al. Pediatric research in office settings (PROS): a national practice-based research network to improve children's health care. Pediatrics. 1998;102(6):1350-1357.
6. Landrigan CP, Conway PH, Stucky ER, Chiang VW, Ottolini MC. Variation in pediatric hospitalists' use of proven and unproven therapies: a study from the Pediatric Research in Inpatient Settings (PRIS) network. J Hosp Med. 2008;3(4):292-298.
7. Conway PH, Edwards S, Stucky ER, Chiang VW, Ottolini MC, Landrigan CP. Variations in management of common inpatient pediatric illnesses: hospitalists and community pediatricians. Pediatrics. 2006;118(2):441-447.
8. Mittal VS, Sigrest T, Ottolini MC, et al. Family-centered rounds on pediatric wards: a PRIS network survey of US and Canadian hospitalists. Pediatrics. 2010;126(1):37-43.
9. Shah SS, DiCristina CM, Bell LM, Ten Have T, Metlay JP. Primary early thoracoscopy and reduction in length of hospital stay and additional procedures among children with complicated pneumonia: results of a multicenter retrospective cohort study. Arch Pediatr Adolesc Med. 2008;162(7):675-681.
10. Simon TD, Hall M, Riva-Cambrin J, et al. Infection rates following initial cerebrospinal fluid shunt placement across pediatric hospitals in the United States. Clinical article. J Neurosurg Pediatr. 2009;4(2):156-165.
11. Srivastava R, Berry JG, Hall M, et al. Reflux related hospital admissions after fundoplication in children with neurological impairment: retrospective cohort study. BMJ. 2009;339:b4411.
12. Tieder JS, Robertson A, Garrison MM. Pediatric hospital adherence to the standard of care for acute gastroenteritis. Pediatrics. 2009;124(6):e1081-1087.
13. Zaoutis T, Localio AR, Leckerman K, Saddlemire S, Bertoch D, Keren R. Prolonged intravenous therapy versus early transition to oral antimicrobial therapy for acute osteomyelitis in children. Pediatrics. 2009;123(2):636-642.
14. Freed GL, Dunham KM. Characteristics of pediatric hospital medicine fellowships and training programs. J Hosp Med. 2009;4(3):157-163.
15. Rauch DA, Lye PS, Carlson D, et al. Pediatric hospital medicine: a strategic planning roundtable to chart the future. J Hosp Med. 2012;7(4):329-334.
16. Keren R, Luan X, Localio AR, et al. A novel method for prioritizing comparative effectiveness research topics. Arch Pediatr Adolesc Med. In press.
17. Narus S, Srivastava R, Gouripeddi R, et al. Federating clinical data from six pediatric hospitals: process and initial results from the PHIS+ Consortium. In: Improving Health: Informatics and IT Changing the World. Proceedings of the AMIA 2011 Annual Symposium, Washington, DC, October 22-26, 2011:994-1003. Epub 2011 October 22.
18. Sectish TC, Starmer AJ, Landrigan CP, Spector ND. Establishing a multisite education and research project requires leadership, expertise, collaboration, and an important aim. Pediatrics. 2010;126(4):619-622.
19. Starmer AJ, Spector ND, Srivastava R, Allen AD, Landrigan CP, Sectish TC. I-PASS, a mnemonic to standardize verbal handoffs. Pediatrics. 2012;129(2):201-204.
20. Auerbach AD, Katz R, Pantilat SZ, et al. Factors associated with discussion of care plans and code status at the time of hospital admission: results from the Multicenter Hospitalist Study. J Hosp Med. 2008;3(6):437-445.
21. Go JT, Vaughan-Sarrazin M, Auerbach A, et al. Do hospitalists affect clinical outcomes and efficiency for patients with acute upper gastrointestinal hemorrhage (UGIH)? J Hosp Med. 2010;5(3):133-139.
22. Hasan O, Meltzer DO, Shaykevich SA, et al. Hospital readmission in general medicine patients: a prediction model. J Gen Intern Med. 2010;25(3):211-219.
23. Anderson WG, Pantilat SZ, Meltzer D, et al. Code status discussions at hospital admission are not associated with patient and surrogate satisfaction with hospital care: results from the Multicenter Hospitalist Study. Am J Hosp Palliat Care. 2011;28(2):102-108.
24. HOMERUN. i2b2 Wiki, HOMERUN page. Available at: https://community.i2b2.org/wiki/display/HOMERUN/HOMERUN+Home. Accessed March 9, 2011.
25. Value in Pediatrics Network Homepage. Available at: http://www.phm‐vipnetwork.com. Accessed June 21, 2012.
Issue
Journal of Hospital Medicine - 7(8)
Page Number
661-664
Display Headline
Development of the pediatric research in inpatient settings (PRIS) network: Lessons learned
Article Source
Copyright © 2012 Society of Hospital Medicine
Correspondence Location
Department of Pediatrics, University of Utah Health Sciences Center, Division of Inpatient Medicine, Primary Children's Medical Center, 100 North Mario Capecchi Dr, Salt Lake City, UT 84113

Pediatric Hospitalist Variation in Care

Article Type
Changed
Sun, 05/28/2017 - 22:07
Display Headline
Variation in pediatric hospitalists' use of proven and unproven therapies: A study from the Pediatric Research in Inpatient Settings (PRIS) network

Reduction of undesirable variation in care has been a major focus of systematic efforts to improve the quality of the healthcare system.1-3 The emergence of hospitalists, physicians specializing in the care of hospitalized patients, was spurred by a desire to streamline care and reduce variability in hospital management of common diseases.4, 5 Over the past decade, hospitalist systems have become a leading vehicle for care delivery.4, 6, 7 It remains unclear, however, whether implementation of hospitalist systems has lessened undesirable variation in the inpatient management of common diseases.

While systematic reviews have found costs and hospital length of stay to be 10-15% lower in both pediatric and internal medicine hospitalist systems, few studies have adequately assessed the processes or quality of care in hospitalist systems.8, 9 Two internal medicine studies have found decreased mortality in hospitalist systems, but the mechanism by which hospitalists apparently achieved these gains is unclear.10, 11 Even less is known about care processes or quality in pediatric hospitalist systems. Death is a rare occurrence in pediatric ward settings, and the seven studies conducted to date comparing pediatric hospitalist and traditional systems have been universally underpowered to detect differences in mortality.9, 12-18 There is a need to better understand care processes as a first step in understanding and improving quality of care in hospitalist systems.19

The Pediatric Research in Inpatient Settings (PRIS) Network was formed to improve the quality of care for hospitalized children through collaborative clinical research. In this study, we sought to study variation in the care of common pediatric conditions among a cohort of pediatric hospitalists. We have previously reported that less variability exists in hospitalists' reported management of inpatient conditions than in the reported management of these same conditions by community‐based pediatricians,20 but we were concerned that substantial undesirable variation (ie, variation in practice due to uncertainty or unsubstantiated local practice traditions, rather than justified variation in care based on different risks of harms or benefits in different patients) may still exist among hospitalists. We therefore conducted a study: 1) to investigate variation in hospitalists' reported use of common inpatient therapies, and 2) to test the hypothesis that greater variation exists in hospitalists' reported use of inpatient therapies of unproven benefit than in those therapies proven to be beneficial.

METHODS

Survey Design and Administration

In 2003, we designed the PRIS Survey to collect data on hospitalists' backgrounds, practices, and training needs, as well as their management of common pediatric conditions. For the current study, we chose a priori to evaluate hospitalists' use of 14 therapies in the management of 4 common conditions: asthma, bronchiolitis, gastroenteritis, and gastro‐esophageal reflux disease (GERD) (Table 1). These four conditions were chosen for study because they were among the top discharge diagnoses (primary and secondary) from the inpatient services at 2 of the authors' institutions (Children's Hospital Boston and Children's Hospital San Diego) during the year before administration of the survey, and because a discrete set of therapeutic agents are commonly used in their management. Respondents were asked to report the frequency with which they used each of the 14 therapies of interest on 5‐point Likert scales (from 1=never to 5=almost always). The survey initially developed was piloted with a small group of hospitalists and pediatricians, and a final version incorporating revisions was subsequently administered to all pediatric hospitalists in the US and Canada identified through any of 3 sources: 1) the Pediatric Research in Inpatient Settings (PRIS) list of participants; 2) the Society for Hospital Medicine (SHM) pediatric hospital medicine e‐mail listserv; and 3) the list of all attendees of the first national pediatric hospitalist conference sponsored by the Ambulatory Pediatrics Association (APA), SHM, and American Academy of Pediatrics (AAP); this meeting was held in San Antonio, Texas, USA in November 2003. Individuals identified through more than 1 of these groups were counted only once. Potential participants were assured that individual responses would be kept confidential, and were e‐mailed an access code to participate in the online survey, using a secure web‐based interface; a paper‐based version was also made available to those who preferred to respond in this manner. Regular reminder notices were sent to all non‐responders. Further details regarding PRIS Survey recruitment and study methods have been published previously.20

Therapies and Conditions Studied
Condition | Therapy | BMJ Clinical Evidence treatment effect categorization* | Study classification
Asthma | Inhaled albuterol | Beneficial | Proven
Asthma | Systemic corticosteroids | Beneficial | Proven
Asthma | Inhaled ipratropium in the first 24 hours of hospitalization | Beneficial | Proven
Asthma | Inhaled ipratropium after the first 24 hours of hospitalization | Unknown effectiveness | Unproven
Bronchiolitis | Inhaled albuterol | Unknown effectiveness | Unproven
Bronchiolitis | Inhaled epinephrine | Unknown effectiveness | Unproven
Bronchiolitis | Systemic corticosteroids | Unknown effectiveness | Unproven
Gastroenteritis | Intravenous hydration | Beneficial | Proven
Gastroenteritis | Lactobacillus | Not assessed | Unproven
Gastroenteritis | Ondansetron | Not assessed | Unproven
Gastro-esophageal reflux disease (GERD) | H2 histamine-receptor antagonists | Unknown effectiveness | Unproven
Gastro-esophageal reflux disease (GERD) | Thickened feeds | Unknown effectiveness, later reclassified as likely to be beneficial | Unproven (Proven in sensitivity analysis)
Gastro-esophageal reflux disease (GERD) | Metoclopramide | Unknown effectiveness | Unproven
Gastro-esophageal reflux disease (GERD) | Proton-pump inhibitors | Unknown effectiveness | Unproven
  • Abbreviation: BMJ, British Medical Journal.

Definitions: Reference Responses and Percent Variation

To measure variation in reported management, we first sought to determine a reference response for each therapy of interest. Since the evidence base for most of the therapies we studied is weak, it was not possible to determine a gold-standard response for each therapy. Instead, we sought to measure the degree of divergence from a reference response for each therapy in the following manner. First, to simplify analyses, we collapsed our five-category Likert scale into three categories (never/rarely, sometimes, and often/almost always). We then defined the reference response for each therapy to be never/rarely or often/almost always, whichever of the 2 was more frequently selected by respondents; sometimes was not used as a reference category, because a response of sometimes indicated substantial variability even within an individual's own practice.
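
A minimal sketch of this definition, assuming each respondent's answer for a given therapy has already been collapsed into the three categories; percent variation is simply the share of respondents who did not give the reference response.

```python
from collections import Counter

def percent_variation(responses):
    """responses: list of collapsed answers, each one of
    'never/rarely', 'sometimes', or 'often/almost always'."""
    counts = Counter(responses)
    # Reference response: whichever extreme category was chosen more often;
    # 'sometimes' is never used as the reference.
    reference = max(["never/rarely", "often/almost always"], key=lambda c: counts[c])
    non_reference = sum(1 for r in responses if r != reference)
    return reference, 100.0 * non_reference / len(responses)

# Illustrative (invented) set of 198 collapsed responses for one therapy.
answers = ["often/almost always"] * 150 + ["sometimes"] * 30 + ["never/rarely"] * 18
ref, variation = percent_variation(answers)
print(ref, f"{variation:.1f}% variation")  # -> often/almost always 24.2% variation
```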

Classification of Therapies as Proven or Unproven

To classify each of the 14 studied therapies as proven or unproven, we used the British Medical Journal's publication Clinical Evidence.19 We chose to use Clinical Evidence as an evidence-based reference because it provides rigorously developed, systematic analyses of therapeutic management options for multiple common pediatric conditions, and organizes recommendations in a straightforward manner. Four of the 14 therapies had been determined on systematic review to be proven beneficial at the time of study design: systemic corticosteroids, inhaled albuterol, and ipratropium (in the first 24 h) in the care of children with asthma; and IV hydration in the care of children with acute gastroenteritis. The remaining 10 therapies were either considered to be of unknown effectiveness or had not been formally evaluated by Clinical Evidence, and were hence considered unproven for this study (Table 1). Of note, the use of thickened feeds in the treatment of children with GERD had been determined to be of unknown effectiveness at the time of study design, but was reclassified as likely to be beneficial during the course of the study.

Analyses

Descriptive statistics were used to report respondents' demographic characteristics and work environments, as well as variation in their reported use of each of the 14 therapies. Variation in hospitalists' use of proven versus unproven therapies was compared using the Wilcoxon rank sum test, because variation was non-normally distributed. For our primary analysis, the use of thickened feeds in GERD was considered unproven, but a sensitivity analysis was conducted reclassifying it as proven in light of the evolving literature on its use and its consequent reclassification in Clinical Evidence. SAS (Version 9.1; SAS Institute, Cary, NC) was used for statistical analyses.
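
A sketch of the comparison described above, using SciPy's rank-sum test; the per-therapy percent-variation values shown here are placeholders rather than the study's data.

```python
from scipy.stats import ranksums

# Percent variation for each therapy, grouped by evidence classification.
# The numbers below are placeholders, not the study's per-therapy values.
proven_variation   = [5.0, 8.0, 15.0, 20.0]                                   # 4 proven therapies
unproven_variation = [60.0, 70.0, 12.0, 36.0, 36.0, 23.0, 45.0, 58.0, 72.0, 40.0]  # 10 unproven

stat, p = ranksums(unproven_variation, proven_variation)
print(f"Wilcoxon rank-sum statistic = {stat:.2f}, p = {p:.3f}")
```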

RESULTS

213 of the 320 individuals identified through the 3 lists of pediatric hospitalists (67%) responded to the survey. Of these, 198 (93%) identified themselves as hospitalists and were therefore included. As previously reported,20 53% of respondents were male, 55% worked in academic training environments, and 47% had completed advanced training (fellowship) beyond their core pediatric training (residency training); respondents reported completing residency training 11 ± 9 (mean ± standard deviation) years prior to the survey, and spending 176 ± 72 days per year in the care of hospitalized patients.

Variation in Reported Management: Asthma

(Figure 1, Panel A). Relatively little variation existed in reported use of the 4 asthma therapies studied. Only 4.4% (95% CI, 1.4‐7.4%) of respondents did not provide the reference response of using inhaled albuterol often or almost always in the care of inpatients with asthma, and only 6.0% (2.5‐9.5%) of respondents did not report using systemic corticosteroids often or almost always. Variation in reported use of ipratropium was somewhat higher.

Figure 1
Percent variation in reported use of common inpatient therapies. (T bars indicate 95% confidence intervals).
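
The reported intervals appear consistent with a simple normal-approximation (Wald) interval around each proportion; the check below assumes all 198 included hospitalists answered each item, since per-item denominators are not reported.

```python
from math import sqrt

def wald_ci(p_hat, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion, returned in percent."""
    se = sqrt(p_hat * (1 - p_hat) / n)
    return 100 * (p_hat - z * se), 100 * (p_hat + z * se)

# 4.4% of ~198 respondents varied from the reference response for inhaled albuterol in asthma.
low, high = wald_ci(0.044, 198)
print(f"4.4% (95% CI {low:.1f}-{high:.1f}%)")  # roughly 1.5-7.3%, close to the reported 1.4-7.4%
```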

Bronchiolitis

(Figure 1B). By contrast, variation in reported use of inhaled therapies for bronchiolitis was high, with many respondents reporting that they often or always used inhaled albuterol or epinephrine, while many others reported rarely or never using them. There was 59.6% (52.4‐66.8%) variation from the reference response of often/almost always using inhaled albuterol, and 72.2% (65.6‐78.8%) variation from the reference response of never/rarely using inhaled epinephrine. Only 11.6% (6.9‐16.3%) of respondents, however, varied from the reference response of using dexamethasone more than rarely in the care of children with bronchiolitis.

Gastroenteritis

(Figure 1C). Moderate variability existed in the reported use of the 3 studied therapies for children hospitalized with gastroenteritis. 21.1% (15.1‐27.1%) of respondents did not provide the reference response of often/almost always using IV hydration; 35.9% (28.9‐42.9%) did not provide the reference response of never or rarely using lactobacillus; likewise, 35.9% (28.9‐42.9%) did not provide the reference response of never or rarely using ondansetron.

Gastro‐Esophageal Reflux Disease

(Figure 1, Panel D). There was moderate to high variability in the reported management of GERD. 22.8% (16.7‐28.9%) of respondents did not provide the reference response of often/almost always using H2 antagonists, and 44.9% (37.6‐52.2%) did not report often/almost always using thickened feeds in the care of these children. 58.3% (51.1‐65.5%) and 72.1% (65.5‐78.7%) of respondents did not provide the reference response of never/rarely using metoclopramide and proton pump inhibitors, respectively.

Proven vs. Unproven Therapies

(Figure 2). Variation in reported use of therapies of unproven benefit was significantly higher than variation in reported use of the 4 proven therapies (albuterol, corticosteroids, and ipratropium in the first 24 h for asthma; IV re-hydration for gastroenteritis). The mean variation in reported use of unproven therapies was 44.6 ± 20.5%, compared with 15.5 ± 12.5% variation in reported use of therapies of proven benefit (p = 0.02).

Figure 2
Variation in reported use of proven versus unproven therapies (T bars indicate standard deviations).

As a sensitivity analysis, the use of thickened feeds as a therapy for GERD was re-categorized as proven and the above analysis repeated, for the reasons outlined in the Methods section. This did not fundamentally alter the identified relationship between variability and the evidence base: in this sensitivity analysis, hospitalists' reported variation in use of therapies of unproven benefit was 44.6 ± 21.7%, compared with 21.4 ± 17.0% variation in reported use of proven therapies (p = 0.05).

DISCUSSION

Substantial variation exists in the inpatient management of common pediatric diseases. Although we have previously found less reported variability in pediatric hospitalists' practices than in those of community‐based pediatricians,20 the current study demonstrates a high degree of reported variation even among a cohort of inpatient specialists. Importantly, however, reported variation was found to be significantly less for those inpatient therapies supported by a robust evidence base.

Bronchiolitis, gastroenteritis, asthma, and GERD are extremely common causes of pediatric hospitalization throughout the developed world.21-25 Our finding of high reported variability in the routine care of all of these conditions except asthma is concerning, as it suggests that experts do not agree on how to manage children hospitalized with even the most common childhood diseases. While we hypothesized that there would be some variation in the use of therapies whose benefit has not been well established, the high degree of variation observed is of concern because it indicates that an insufficient evidentiary base exists to support much of our day-to-day practice. Some variation in practice in response to differing clinical presentations is both expected and desirable. It is remarkable, however, that variance in practice was significantly less for the most evidence-based therapies than for those grounded less firmly in science, suggesting that the variation identified here is not justifiable variation based on appropriate responses to atypical clinical presentations, but rather reflects uncertainty in the absence of clear data. Such undesired variability may decrease system reliability (introducing avoidable opportunity for error)26 and lead to under-use of needed therapies as well as overuse of unnecessary therapies.1

Our work extends prior research that has identified wide variation in patterns of hospital admission, use of hospital resources, and processes of inpatient care,27-32 by documenting reported variation in the use of common inpatient therapies. Rates of hospital admission may vary by as much as 7-fold across regions.33 Our study demonstrates that wide variation exists not only in admission rates, but also in reported inpatient care processes for some of the most common diseases seen in pediatric hospitals. Our study also supports the hypothesis that variation in care may be driven by gaps in knowledge.32 Among hospitalists, we found the strength of the evidence base to be a major determinant of reported variability.

Our study has several limitations. First, the data presented here are derived from provider self-reports, which may not fully reflect actual practice. In the case of the few proven therapies studied, reporting bias could lead to over-reporting of adherence to evidence-based standards of care. However, prior studies have likewise found that hospital-based providers fairly consistently comply with evidence-based practice recommendations for acute asthma care,34, 35 supporting our finding that variation in acute asthma care (which accounted for 3 of our 4 proven therapies) is low in this setting.

Another limitation is that classifications of therapies as proven or unproven change as the evidence base evolves. Of particular relevance to this study, the use of thickened feeds as a therapy for GERD, originally classified as being of unknown effectiveness, was reclassified by Clinical Evidence during the course of the study as likely to be beneficial. The relationship we identified between proven therapies and degree of variability in care did not change when we conducted a sensitivity analysis re‐categorizing this therapy as proven, but precisely quantifying variation is complicated by continuous changes in the state of the evidence.

Pediatric hospitalist systems have been found consistently to improve the efficiency of care,9 yet this study suggests that considerable variation in hospitalists' management of key conditions remains. The Pediatric Research in Inpatient Settings (PRIS) Network was formed in 2002 to improve the care of hospitalized children and the quality of inpatient practice by developing an evidence base for inpatient pediatric care. Ongoing multi‐center research efforts through PRIS and other research networks are beginning to critically evaluate therapies used in the management of common pediatric conditions. Rigorous studies of the processes and outcomes of pediatric hospital care will inform inpatient pediatric practice, and ultimately improve the care of hospitalized children. The current study strongly affirms the urgent need to establish such an evidence base. Without data to inform optimal care, efforts to reduce undesirable variation in care and improve care quality cannot be fully realized.

Acknowledgements

The authors would like to extend their thanks to the hospitalists and members of the Pediatric Research in Inpatient Settings Network who participated in this research, as well as the Children's National Medical Center and Children's Hospital Boston Inpatient Pediatrics Services, who provided funding to support this study. Special thanks to the Ambulatory Pediatrics Association (APA), for its core support of the PRIS Network. Dr. Landrigan is the recipient of a career development award from the Agency for Healthcare Research and Quality (AHRQ K08 HS13333). Dr. Conway is the recipient of a Robert Wood Johnson Clinical Scholars Grant. All researchers were independent from the funding agencies; the academic medical centers named above, APA, and AHRQ had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript.

References
  1. Institute of Medicine: Crossing the Quality Chasm: A New Health System for the 21st Century.Washington, D.C.:National Academic Press,2001.
  2. Urbach DR.Baxter NN.Reducing variation in surgical care.BMJ2005;330:14011402.
  3. Sedrakyan A,van der MJ,Lewsey J,Treasure T.Variation in use of video assisted thoracic surgery in the United Kingdom.BMJ2004;329:10111012.
  4. Wachter RM,.Goldman L.The emerging role of “hospitalists” in the American health care system.N. Engl J Med1996;335:514517.
  5. Maviglia SM,.Bates D.Hospitalism in the USA.Lancet1999;353:1902.
  6. Society of Hospital Medicine. Growth of Hospital Medicine Nationwide. Available at: http://www.hospitalmedicine.org/Content/NavigationMenu/Media/GrowthofHospitalMedicineNationwide/Growth_of_Hospital_M.htm. Accessed April 11,2007.
  7. Terry K.The changing face of hospital practice.Med Econ2002;79:7279.
  8. Wachter RM,.Goldman L.The hospitalist movement 5 years later.JAMA2002;287:487494.
  9. Landrigan CP,Conway PH,Edwards S,Srivastava R.Pediatric hospitalists: a systematic review of the literature.Pediatrics2006;117:17361744.
  10. Auerbach AD,Wachter RM,Katz P,Showstack J,Baron RB,Goldman L.Implementation of a voluntary hospitalist service at a community teaching hospital: improved clinical efficiency and patient outcomes.Ann Intern Med2002;137:859865.
  11. Meltzer D,Manning WG,Morrison J,Shah MN,Jin L,Guth T, et al.Effects of physician experience on costs and outcomes on an academic general medicine service: results of a trial of hospitalists.Ann Intern Med2002;137:866874.
  12. Bellet PS,Whitaker RC.Evaluation of a pediatric hospitalist service: impact on length of stay and hospital charges.Pediatrics2000;105:478484.
  13. Landrigan C,Srivastava R,Muret‐Wagstaff S,Soumerai SB,Ross‐Degnan D,Graef JW,Homer CJ, and Goldmann DA.Impact of an HMO hospitalist system in academic pediatrics.Pediatrics2002;110:720728.
  14. Maggioni A,Reyes M, and Lifshitz F.Evaluation of a pediatric hospitalist service by APR‐DRG's: impact on length of stay and hospital charges.Pediatr Research2001;49(suppl),691.
  15. Wells RD,Dahl B,Wilson SD.Pediatric hospitalists: quality care for the underserved?Am J Med Qual2001;16:174180.
  16. Ogershok PR,Li X,Palmer HC,Moore RS,Weisse ME,Ferrari ND.Restructuring an academic pediatric inpatient service using concepts developed by hospitalists.Clin Pediatr (Phila)2001;40:653660.
  17. Srivastava R,Muret‐Wagstaff S,Young PC, and James BC.Hospitalist care of medically complex children.Pediatr Research2004;55(suppl),1789.
  18. Seid M,Quinn K,Kurtin PS.Hospital‐based and community pediatricians: comparing outcomes for asthma and bronchiolitis.J Clin Outcomes Manage1997;4:2124.
  19. Godlee F,Tovey D,Bedford M, et al., eds.Clinical Evidence: The International Source of the Best Available Evidence for Effective Health Care.London, United Kingdom:BMJ Publishing Group;2004.
  20. Conway PH,Edwards S,Stucky ER,Chiang VW,Ottolini MC,Landrigan CP.Variations in management of common inpatient pediatric illnesses: hospitalists and community pediatricians.Pediatrics2006;118:441447.
  21. Muller‐Pebody B,Edmunds WJ,Zambon MC,Gay NJ,Crowcroft NS.Contribution of RSV to bronchiolitis and pneumonia‐associated hospitalizations in English children, April 1995‐March 1998.Epidemiol Infect2002;129:99106.
  22. Pelletier AJ,Mansbach JM,Camargo CA.Direct medical costs of bronchiolitis hospitalizations in the United States.Pediatrics2006;118:24182423.
  23. Van Damme P,Giaquinto C,Huet F,Gothefors L,Maxwell M,Van der WM.Multicenter Prospective Study of the Burden of Rotavirus Acute Gastroenteritis in Europe, 2004‐2005: The REVEAL Study.J Infect Dis2007;195Suppl 1:S4S16.
  24. Akinbami L.The state of childhood asthma, United States, 1980‐2005.Adv.Data.2006;124.
  25. Gold BD,Freston JW.Gastroesophageal reflux in children: pathogenesis, prevalence, diagnosis, and role of proton pump inhibitors in treatment.Paediatr Drugs2002;4:673685.
  26. Luria JW,Muething SE,Schoettker PJ,Kotagal UR.Reliability science and patient safety.Pediatr Clin North Am2006;53:11211133.
  27. Wennberg JE and McAndrew Cooper M, eds.The Dartmouth Atlas of Health Care in the United States.Hanover, NH, USA:Health Forum, Inc.,1999.
  28. Perrin JM,Homer CJ,Berwick DM,Woolf AD,Freeman JL,Wennberg JE.Variations in rates of hospitalization of children in three urban communities.N Engl J Med1989;320:11831187.
  29. Wennberg JE,Fisher ES,Stukel TA,Skinner JS,Sharp SM,Bronner KK.Use of hospitals, physician visits, and hospice care during last six months of life among cohorts loyal to highly respected hospitals in the United States.BMJ2004;328:607.
  30. Lee SK,McMillan DD,Ohlsson A,Pendray M,Synnes A,Whyte R, et al.Variations in practice and outcomes in the Canadian NICU network: 1996‐1997.Pediatrics2000;106:10701079.
  31. Nelson DG,Leake J,Bradley J,Kuppermann N.Evaluation of febrile children with petechial rashes: is there consensus among pediatricians?Pediatr Infect Dis J1998;17:11351140.
  32. Plint AC,Johnson DW,Wiebe N,Bulloch B,Pusic M,Joubert G, et al.Practice variation among pediatric emergency departments in the treatment of bronchiolitis.Acad Emerg Med2004;11:353360.
  33. Thakker Y,Sheldon TA,Long R,MacFaul R.Paediatric inpatient utilisation in a district general hospital.Arch Dis Child1994;70:488492.
  34. Mahadevan M,Jin A,Manning P,Lim TK.Emergency department asthma: compliance with an evidence‐based management algorithm.Ann Acad Med Singapore2002;31:419424.
  35. Moyer VA,Gist AK,Elliott EJ.Is the practice of paediatric inpatient medicine evidence‐based?J Paediatr Child Health2002;38:347351.
Issue
Journal of Hospital Medicine - 3(4)
Page Number
292-298
Legacy Keywords
hospitalist, pediatric, variation, variability, evidence-based medicine, research network

Reduction of undesirable variation in care has been a major focus of systematic efforts to improve the quality of the healthcare system.13 The emergence of hospitalists, physicians specializing in the care of hospitalized patients, was spurred by a desire to streamline care and reduce variability in hospital management of common diseases.4, 5 Over the past decade, hospitalist systems have become a leading vehicle for care delivery.4, 6, 7 It remains unclear, however, whether implementation of hospitalist systems has lessened undesirable variation in the inpatient management of common diseases.

While systematic reviews have found costs and hospital length of stay to be 10‐15% lower in both pediatric and internal medicine hospitalist systems, few studies have adequately assessed the processes or quality of care in hospitalist systems.8, 9 Two internal medicine studies have found decreased mortality in hospitalist systems, but the mechanism by which hospitalists apparently achieved these gains is unclear.10, 11 Even less is known about care processes or quality in pediatric hospitalist systems. Death is a rare occurrence in pediatric ward settings, and the seven studies conducted to date comparing pediatric hospitalist and traditional systems have been universally underpowered to detect differences in mortality.9, 1218 There is a need to better understand care processes as a first step in understanding and improving quality of care in hospitalist systems.19

The Pediatric Research in Inpatient Settings (PRIS) Network was formed to improve the quality of care for hospitalized children through collaborative clinical research. In this study, we sought to study variation in the care of common pediatric conditions among a cohort of pediatric hospitalists. We have previously reported that less variability exists in hospitalists' reported management of inpatient conditions than in the reported management of these same conditions by community‐based pediatricians,20 but we were concerned that substantial undesirable variation (ie, variation in practice due to uncertainty or unsubstantiated local practice traditions, rather than justified variation in care based on different risks of harms or benefits in different patients) may still exist among hospitalists. We therefore conducted a study: 1) to investigate variation in hospitalists' reported use of common inpatient therapies, and 2) to test the hypothesis that greater variation exists in hospitalists' reported use of inpatient therapies of unproven benefit than in those therapies proven to be beneficial.

METHODS

Survey Design and Administration

In 2003, we designed the PRIS Survey to collect data on hospitalists' backgrounds, practices, and training needs, as well as their management of common pediatric conditions. For the current study, we chose a priori to evaluate hospitalists' use of 14 therapies in the management of 4 common conditions: asthma, bronchiolitis, gastroenteritis, and gastro‐esophageal reflux disease (GERD) (Table 1). These four conditions were chosen for study because they were among the top discharge diagnoses (primary and secondary) from the inpatient services at 2 of the authors' institutions (Children's Hospital Boston and Children's Hospital San Diego) during the year before administration of the survey, and because a discrete set of therapeutic agents are commonly used in their management. Respondents were asked to report the frequency with which they used each of the 14 therapies of interest on 5‐point Likert scales (from 1=never to 5=almost always). The survey initially developed was piloted with a small group of hospitalists and pediatricians, and a final version incorporating revisions was subsequently administered to all pediatric hospitalists in the US and Canada identified through any of 3 sources: 1) the Pediatric Research in Inpatient Settings (PRIS) list of participants; 2) the Society for Hospital Medicine (SHM) pediatric hospital medicine e‐mail listserv; and 3) the list of all attendees of the first national pediatric hospitalist conference sponsored by the Ambulatory Pediatrics Association (APA), SHM, and American Academy of Pediatrics (AAP); this meeting was held in San Antonio, Texas, USA in November 2003. Individuals identified through more than 1 of these groups were counted only once. Potential participants were assured that individual responses would be kept confidential, and were e‐mailed an access code to participate in the online survey, using a secure web‐based interface; a paper‐based version was also made available to those who preferred to respond in this manner. Regular reminder notices were sent to all non‐responders. Further details regarding PRIS Survey recruitment and study methods have been published previously.20

Table 1. Therapies and Conditions Studied

Condition | Therapy | BMJ Clinical Evidence treatment effect categorization* | Study classification
Asthma | Inhaled albuterol | Beneficial | Proven
Asthma | Systemic corticosteroids | Beneficial | Proven
Asthma | Inhaled ipratropium in the first 24 hours of hospitalization | Beneficial | Proven
Asthma | Inhaled ipratropium after the first 24 hours of hospitalization | Unknown effectiveness | Unproven
Bronchiolitis | Inhaled albuterol | Unknown effectiveness | Unproven
Bronchiolitis | Inhaled epinephrine | Unknown effectiveness | Unproven
Bronchiolitis | Systemic corticosteroids | Unknown effectiveness | Unproven
Gastroenteritis | Intravenous hydration | Beneficial | Proven
Gastroenteritis | Lactobacillus | Not assessed | Unproven
Gastroenteritis | Ondansetron | Not assessed | Unproven
Gastro‐esophageal reflux disease (GERD) | H2 histamine‐receptor antagonists | Unknown effectiveness | Unproven
GERD | Thickened feeds | Unknown effectiveness (later Likely to be beneficial) | Unproven (Proven in sensitivity analysis)
GERD | Metoclopramide | Unknown effectiveness | Unproven
GERD | Proton‐pump inhibitors | Unknown effectiveness | Unproven

*Abbreviation: BMJ, British Medical Journal.

Definitions

Reference Responses and Percent Variation

To measure variation in reported management, we first sought to determine a reference response for each therapy of interest. Because the evidence base for most of the therapies we studied is weak, it was not possible to determine a gold standard response for each therapy. Instead, we measured the degree of divergence from a reference response for each therapy in the following manner. First, to simplify analyses, we collapsed our five‐category Likert scale into three categories (never/rarely, sometimes, and often/almost always). We then defined the reference response for each therapy to be never/rarely or often/almost always, whichever of the two was more frequently selected by respondents; sometimes was not used as a reference category because a response of sometimes itself indicates substantial variability within an individual's own practice. Percent variation for each therapy was defined as the percentage of respondents whose response differed from the reference response.
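As a concrete illustration of these definitions, the short Python sketch below collapses hypothetical 1-5 Likert responses for a single therapy, takes the reference response as the more frequently selected of the two extreme categories, and computes percent variation. The example data and the normal-approximation confidence interval are assumptions for illustration only; the manuscript does not specify how its 95% confidence intervals were calculated.

```python
# Sketch of the reference-response / percent-variation calculation described above.
# Hypothetical responses and a normal-approximation 95% CI are assumed.
from collections import Counter
from math import sqrt

def collapse(likert: int) -> str:
    """Collapse a 1-5 Likert response into the study's three categories."""
    if likert <= 2:
        return "never/rarely"
    if likert == 3:
        return "sometimes"
    return "often/almost always"

def percent_variation(responses, z=1.96):
    """Return (reference response, % varying from it, approximate 95% CI)."""
    cats = [collapse(r) for r in responses]
    counts = Counter(cats)
    # Reference response: the more common of the two extreme categories.
    reference = max(["never/rarely", "often/almost always"], key=lambda c: counts[c])
    n = len(cats)
    p = sum(c != reference for c in cats) / n           # proportion not giving the reference
    half_width = z * sqrt(p * (1 - p) / n)              # normal approximation (assumption)
    return reference, 100 * p, (100 * (p - half_width), 100 * (p + half_width))

# Example with made-up responses for one therapy (n = 10):
ref, pct, ci = percent_variation([5, 5, 4, 5, 3, 2, 5, 4, 5, 1])
print(ref, round(pct, 1), [round(x, 1) for x in ci])
```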

Classification of Therapies as Proven or Unproven

To classify each of the 14 studied therapies as proven or unproven, we used the British Medical Journal's publication Clinical Evidence.19 We chose Clinical Evidence as an evidence‐based reference because it provides rigorously developed, systematic analyses of therapeutic management options for multiple common pediatric conditions and organizes its recommendations in a straightforward manner. Four of the 14 therapies had been determined on systematic review to be of proven benefit at the time of study design: systemic corticosteroids, inhaled albuterol, and ipratropium (in the first 24 h) in the care of children with asthma; and IV hydration in the care of children with acute gastroenteritis. The remaining 10 therapies were either considered to be of unknown effectiveness or had not been formally evaluated by Clinical Evidence, and were hence considered unproven for this study (Table 1). Of note, the use of thickened feeds in the treatment of children with GERD had been determined to be of unknown effectiveness at the time of study design, but was reclassified as likely to be beneficial during the course of the study.
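For readers who wish to reproduce the grouping used in the analyses, the classification in Table 1 can be expressed as a simple mapping. The sketch below merely restates the table; the identifier names are illustrative and not taken from the study's analysis code.

```python
# Table 1 as a mapping from (condition, therapy) to this study's classification.
# "proven" vs "unproven" follows BMJ Clinical Evidence at the time of study design;
# thickened feeds is reclassified as proven only in the sensitivity analysis.
THERAPY_CLASSIFICATION = {
    ("asthma", "inhaled albuterol"): "proven",
    ("asthma", "systemic corticosteroids"): "proven",
    ("asthma", "ipratropium, first 24 h"): "proven",
    ("asthma", "ipratropium, after 24 h"): "unproven",
    ("bronchiolitis", "inhaled albuterol"): "unproven",
    ("bronchiolitis", "inhaled epinephrine"): "unproven",
    ("bronchiolitis", "systemic corticosteroids"): "unproven",
    ("gastroenteritis", "intravenous hydration"): "proven",
    ("gastroenteritis", "lactobacillus"): "unproven",
    ("gastroenteritis", "ondansetron"): "unproven",
    ("GERD", "H2-receptor antagonists"): "unproven",
    ("GERD", "thickened feeds"): "unproven",   # "proven" in the sensitivity analysis
    ("GERD", "metoclopramide"): "unproven",
    ("GERD", "proton-pump inhibitors"): "unproven",
}
```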

Analyses

Descriptive statistics were used to report respondents' demographic characteristics and work environments, as well as variation in their reported use of each of the 14 therapies. Because percent variation was distributed non‐normally, variation in hospitalists' use of proven versus unproven therapies was compared using the Wilcoxon rank sum test. For our primary analysis, the use of thickened feeds in GERD was considered unproven, but a sensitivity analysis was conducted reclassifying it as proven in light of the evolving literature on its use and its consequent reclassification in Clinical Evidence. SAS version 9.1 (SAS Institute, Cary, NC) was used for all statistical analyses.
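A minimal sketch of this comparison, under stated assumptions, is given below. It uses SciPy's Mann-Whitney U test, which is equivalent to the Wilcoxon rank sum test used in the study; the percent-variation values mix percentages reported in the Results with two placeholder values (marked in the comments) for the ipratropium items, whose exact percentages are not reported in the text.

```python
# Sketch of the proven-vs-unproven comparison described above.
# Group values are per-therapy percent variation; two entries are placeholders.
from scipy.stats import mannwhitneyu  # equivalent to the Wilcoxon rank sum test

proven_variation = [4.4, 6.0, 15.0, 21.1]        # asthma albuterol, steroids,
                                                 # ipratropium <24 h (placeholder), IV hydration
unproven_variation = [40.0,                      # ipratropium >24 h (placeholder)
                      59.6, 72.2, 11.6,          # bronchiolitis albuterol, epinephrine, steroids
                      35.9, 35.9,                # lactobacillus, ondansetron
                      22.8, 44.9, 58.3, 72.1]    # H2RA, thickened feeds, metoclopramide, PPI

stat, p_value = mannwhitneyu(unproven_variation, proven_variation, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")

# Sensitivity analysis: treat thickened feeds (44.9%) as proven and repeat.
proven_sens = proven_variation + [44.9]
unproven_sens = [v for v in unproven_variation if v != 44.9]
stat_s, p_s = mannwhitneyu(unproven_sens, proven_sens, alternative="two-sided")
print(f"Sensitivity analysis: U = {stat_s:.1f}, p = {p_s:.3f}")
```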

RESULTS

A total of 213 of the 320 individuals identified through the 3 lists of pediatric hospitalists (67%) responded to the survey. Of these, 198 (93%) identified themselves as hospitalists and were therefore included. As previously reported,20 53% of respondents were male, 55% worked in academic training environments, and 47% had completed advanced (fellowship) training beyond their core pediatric residency training; respondents reported completing residency training 11 ± 9 (mean ± standard deviation) years prior to the survey and spending 176 ± 72 days per year in the care of hospitalized patients.

Variation in Reported Management: Asthma

(Figure 1, Panel A). Relatively little variation existed in reported use of the 4 asthma therapies studied. Only 4.4% (95% CI, 1.4‐7.4%) of respondents did not provide the reference response of using inhaled albuterol often or almost always in the care of inpatients with asthma, and only 6.0% (2.5‐9.5%) of respondents did not report using systemic corticosteroids often or almost always. Variation in reported use of ipratropium was somewhat higher.

Figure 1
Percent variation in reported use of common inpatient therapies. (T bars indicate 95% confidence intervals).

Bronchiolitis

(Figure 1B). By contrast, variation in reported use of inhaled therapies for bronchiolitis was high, with many respondents reporting that they often or almost always used inhaled albuterol or epinephrine, while many others reported rarely or never using them. There was 59.6% (52.4‐66.8%) variation from the reference response of often/almost always using inhaled albuterol, and 72.2% (65.6‐78.8%) variation from the reference response of never/rarely using inhaled epinephrine. Only 11.6% (6.9‐16.3%) of respondents, however, varied from the reference response of never/rarely using dexamethasone in the care of children with bronchiolitis.

Gastroenteritis

(Figure 1C). Moderate variability existed in the reported use of the 3 studied therapies for children hospitalized with gastroenteritis. 21.1% (15.1‐27.1%) of respondents did not provide the reference response of often/almost always using IV hydration; 35.9% (28.9‐42.9%) did not provide the reference response of never or rarely using lactobacillus; likewise, 35.9% (28.9‐42.9%) did not provide the reference response of never or rarely using ondansetron.

Gastro‐Esophageal Reflux Disease

(Figure 1, Panel D). There was moderate to high variability in the reported management of GERD. 22.8% (16.7‐28.9%) of respondents did not provide the reference response of often/almost always using H2 antagonists, and 44.9% (37.6‐52.2%) did not report often/almost always using thickened feeds in the care of these children. 58.3% (51.1‐65.5%) and 72.1% (65.5‐78.7%) of respondents did not provide the reference response of never/rarely using metoclopramide and proton pump inhibitors, respectively.

Proven vs. Unproven Therapies

(Figure 2). Variation in reported use of therapies of unproven benefit was significantly higher than variation in reported use of the 4 proven therapies (albuterol, corticosteroids, and ipratropium in the first 24 h for asthma; IV re‐hydration for gastroenteritis). The mean variation in reported use of unproven therapies was 44.6% ± 20.5%, compared with 15.5% ± 12.5% variation in reported use of therapies of proven benefit (p = 0.02).

Figure 2
Variation in reported use of proven versus unproven therapies (T bars indicate standard deviations).

As a sensitivity analysis, the use of thickened feeds as a therapy for GERD was re‐categorized as proven and the above analysis repeated, for the reasons outlined in the Methods section. This did not fundamentally alter the identified relationship between variability and the evidence base; in the sensitivity analysis, hospitalists' reported variation in use of therapies of unproven benefit was 44.6% ± 21.7%, compared with 21.4% ± 17.0% variation in reported use of proven therapies (p = 0.05).

DISCUSSION

Substantial variation exists in the inpatient management of common pediatric diseases. Although we have previously found less reported variability in pediatric hospitalists' practices than in those of community‐based pediatricians,20 the current study demonstrates a high degree of reported variation even among a cohort of inpatient specialists. Importantly, however, reported variation was found to be significantly less for those inpatient therapies supported by a robust evidence base.

Bronchiolitis, gastroenteritis, asthma, and GERD are extremely common causes of pediatric hospitalization throughout the developed world.21-25 Our finding of high reported variability in the routine care of all of these conditions except asthma is concerning, as it suggests that experts do not agree on how to manage children hospitalized with even the most common childhood diseases. While we hypothesized that there would be some variation in the use of therapies whose benefit has not been well established, the high degree of variation observed is of concern because it indicates that an insufficient evidence base exists to support much of our day‐to‐day practice. Some variation in practice in response to differing clinical presentations is both expected and desirable. It is remarkable, however, that variance in practice was significantly less for the most evidence‐based therapies than for those grounded less firmly in science, suggesting that the variation identified here is not justifiable variation based on appropriate responses to atypical clinical presentations, but rather reflects uncertainty in the absence of clear data. Such undesired variability may decrease system reliability (introducing avoidable opportunity for error),26 and lead to underuse of needed therapies as well as overuse of unnecessary therapies.1

Our work extends prior research that has identified wide variation in patterns of hospital admission, use of hospital resources, and processes of inpatient care,27-32 by documenting reported variation in the use of common inpatient therapies. Rates of hospital admission may vary by as much as 7‐fold across regions.33 Our study demonstrates that wide variation exists not only in admission rates, but in reported inpatient care processes for some of the most common diseases seen in pediatric hospitals. Our study also supports the hypothesis that variation in care may be driven by gaps in knowledge.32 Among hospitalists, we found the strength of the evidence base to be a major determinant of reported variability.

Our study has several limitations. First, the data presented here are derived from provider self‐reports, which may not fully reflect actual practice. In the case of the few proven therapies studied, reporting bias could lead to over‐reporting of adherence to evidence‐based standards of care. Prior studies, however, have similarly found that hospital‐based providers comply fairly consistently with evidence‐based practice recommendations for acute asthma care,34,35 supporting our finding that variation in acute asthma care (which represented 3 of our 4 proven therapies) is low in this setting.

Another limitation is that classifications of therapies as proven or unproven change as the evidence base evolves. Of particular relevance to this study, the use of thickened feeds as a therapy for GERD, originally classified as being of unknown effectiveness, was reclassified by Clinical Evidence during the course of the study as likely to be beneficial. The relationship we identified between proven therapies and degree of variability in care did not change when we conducted a sensitivity analysis re‐categorizing this therapy as proven, but precisely quantifying variation is complicated by continuous changes in the state of the evidence.

Pediatric hospitalist systems have been found consistently to improve the efficiency of care,9 yet this study suggests that considerable variation in hospitalists' management of key conditions remains. The Pediatric Research in Inpatient Settings (PRIS) Network was formed in 2002 to improve the care of hospitalized children and the quality of inpatient practice by developing an evidence base for inpatient pediatric care. Ongoing multi‐center research efforts through PRIS and other research networks are beginning to critically evaluate therapies used in the management of common pediatric conditions. Rigorous studies of the processes and outcomes of pediatric hospital care will inform inpatient pediatric practice, and ultimately improve the care of hospitalized children. The current study strongly affirms the urgent need to establish such an evidence base. Without data to inform optimal care, efforts to reduce undesirable variation in care and improve care quality cannot be fully realized.

Acknowledgements

The authors would like to extend their thanks to the hospitalists and members of the Pediatric Research in Inpatient Settings Network who participated in this research, as well as the Children's National Medical Center and Children's Hospital Boston Inpatient Pediatrics Services, who provided funding to support this study. Special thanks to the Ambulatory Pediatrics Association (APA), for its core support of the PRIS Network. Dr. Landrigan is the recipient of a career development award from the Agency for Healthcare Research and Quality (AHRQ K08 HS13333). Dr. Conway is the recipient of a Robert Wood Johnson Clinical Scholars Grant. All researchers were independent from the funding agencies; the academic medical centers named above, APA, and AHRQ had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript.

References
1. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
2. Urbach DR, Baxter NN. Reducing variation in surgical care. BMJ. 2005;330:1401-1402.
3. Sedrakyan A, van der MJ, Lewsey J, Treasure T. Variation in use of video assisted thoracic surgery in the United Kingdom. BMJ. 2004;329:1011-1012.
4. Wachter RM, Goldman L. The emerging role of "hospitalists" in the American health care system. N Engl J Med. 1996;335:514-517.
5. Maviglia SM, Bates D. Hospitalism in the USA. Lancet. 1999;353:1902.
6. Society of Hospital Medicine. Growth of Hospital Medicine Nationwide. Available at: http://www.hospitalmedicine.org/Content/NavigationMenu/Media/GrowthofHospitalMedicineNationwide/Growth_of_Hospital_M.htm. Accessed April 11, 2007.
7. Terry K. The changing face of hospital practice. Med Econ. 2002;79:72-79.
8. Wachter RM, Goldman L. The hospitalist movement 5 years later. JAMA. 2002;287:487-494.
9. Landrigan CP, Conway PH, Edwards S, Srivastava R. Pediatric hospitalists: a systematic review of the literature. Pediatrics. 2006;117:1736-1744.
10. Auerbach AD, Wachter RM, Katz P, Showstack J, Baron RB, Goldman L. Implementation of a voluntary hospitalist service at a community teaching hospital: improved clinical efficiency and patient outcomes. Ann Intern Med. 2002;137:859-865.
11. Meltzer D, Manning WG, Morrison J, Shah MN, Jin L, Guth T, et al. Effects of physician experience on costs and outcomes on an academic general medicine service: results of a trial of hospitalists. Ann Intern Med. 2002;137:866-874.
12. Bellet PS, Whitaker RC. Evaluation of a pediatric hospitalist service: impact on length of stay and hospital charges. Pediatrics. 2000;105:478-484.
13. Landrigan C, Srivastava R, Muret-Wagstaff S, Soumerai SB, Ross-Degnan D, Graef JW, Homer CJ, Goldmann DA. Impact of an HMO hospitalist system in academic pediatrics. Pediatrics. 2002;110:720-728.
14. Maggioni A, Reyes M, Lifshitz F. Evaluation of a pediatric hospitalist service by APR-DRGs: impact on length of stay and hospital charges. Pediatr Research. 2001;49(suppl):691.
15. Wells RD, Dahl B, Wilson SD. Pediatric hospitalists: quality care for the underserved? Am J Med Qual. 2001;16:174-180.
16. Ogershok PR, Li X, Palmer HC, Moore RS, Weisse ME, Ferrari ND. Restructuring an academic pediatric inpatient service using concepts developed by hospitalists. Clin Pediatr (Phila). 2001;40:653-660.
17. Srivastava R, Muret-Wagstaff S, Young PC, James BC. Hospitalist care of medically complex children. Pediatr Research. 2004;55(suppl):1789.
18. Seid M, Quinn K, Kurtin PS. Hospital-based and community pediatricians: comparing outcomes for asthma and bronchiolitis. J Clin Outcomes Manage. 1997;4:21-24.
19. Godlee F, Tovey D, Bedford M, et al., eds. Clinical Evidence: The International Source of the Best Available Evidence for Effective Health Care. London, United Kingdom: BMJ Publishing Group; 2004.
20. Conway PH, Edwards S, Stucky ER, Chiang VW, Ottolini MC, Landrigan CP. Variations in management of common inpatient pediatric illnesses: hospitalists and community pediatricians. Pediatrics. 2006;118:441-447.
21. Muller-Pebody B, Edmunds WJ, Zambon MC, Gay NJ, Crowcroft NS. Contribution of RSV to bronchiolitis and pneumonia-associated hospitalizations in English children, April 1995-March 1998. Epidemiol Infect. 2002;129:99-106.
22. Pelletier AJ, Mansbach JM, Camargo CA. Direct medical costs of bronchiolitis hospitalizations in the United States. Pediatrics. 2006;118:2418-2423.
23. Van Damme P, Giaquinto C, Huet F, Gothefors L, Maxwell M, Van der WM. Multicenter prospective study of the burden of rotavirus acute gastroenteritis in Europe, 2004-2005: the REVEAL study. J Infect Dis. 2007;195(Suppl 1):S4-S16.
24. Akinbami L. The state of childhood asthma, United States, 1980-2005. Adv Data. 2006;1-24.
25. Gold BD, Freston JW. Gastroesophageal reflux in children: pathogenesis, prevalence, diagnosis, and role of proton pump inhibitors in treatment. Paediatr Drugs. 2002;4:673-685.
26. Luria JW, Muething SE, Schoettker PJ, Kotagal UR. Reliability science and patient safety. Pediatr Clin North Am. 2006;53:1121-1133.
27. Wennberg JE, McAndrew Cooper M, eds. The Dartmouth Atlas of Health Care in the United States. Hanover, NH: Health Forum, Inc.; 1999.
28. Perrin JM, Homer CJ, Berwick DM, Woolf AD, Freeman JL, Wennberg JE. Variations in rates of hospitalization of children in three urban communities. N Engl J Med. 1989;320:1183-1187.
29. Wennberg JE, Fisher ES, Stukel TA, Skinner JS, Sharp SM, Bronner KK. Use of hospitals, physician visits, and hospice care during last six months of life among cohorts loyal to highly respected hospitals in the United States. BMJ. 2004;328:607.
30. Lee SK, McMillan DD, Ohlsson A, Pendray M, Synnes A, Whyte R, et al. Variations in practice and outcomes in the Canadian NICU network: 1996-1997. Pediatrics. 2000;106:1070-1079.
31. Nelson DG, Leake J, Bradley J, Kuppermann N. Evaluation of febrile children with petechial rashes: is there consensus among pediatricians? Pediatr Infect Dis J. 1998;17:1135-1140.
32. Plint AC, Johnson DW, Wiebe N, Bulloch B, Pusic M, Joubert G, et al. Practice variation among pediatric emergency departments in the treatment of bronchiolitis. Acad Emerg Med. 2004;11:353-360.
33. Thakker Y, Sheldon TA, Long R, MacFaul R. Paediatric inpatient utilisation in a district general hospital. Arch Dis Child. 1994;70:488-492.
34. Mahadevan M, Jin A, Manning P, Lim TK. Emergency department asthma: compliance with an evidence-based management algorithm. Ann Acad Med Singapore. 2002;31:419-424.
35. Moyer VA, Gist AK, Elliott EJ. Is the practice of paediatric inpatient medicine evidence-based? J Paediatr Child Health. 2002;38:347-351.
Issue
Journal of Hospital Medicine - 3(4)
Page Number
292-298
Display Headline
Variation in pediatric hospitalists' use of proven and unproven therapies: A study from the Pediatric Research in Inpatient Settings (PRIS) network
Legacy Keywords
hospitalist, pediatric, variation, variability, evidence‐based medicine, research network
Article Source

Copyright © 2008 Society of Hospital Medicine

Correspondence Location
Brigham and Women's Hospital, 221 Longwood Ave., 4th floor, Boston, MA 02115