Point-of-Care Ultrasound for Hospitalists: A Position Statement of the Society of Hospital Medicine

Charles M. LoPresti, MD


Many hospitalists incorporate point-of-care ultrasound (POCUS) into their daily practice because it adds value to their bedside evaluation of patients. However, standards for training and assessing hospitalists in POCUS have not yet been established. Other acute care specialties, including emergency medicine and critical care medicine, have already incorporated POCUS into their graduate medical education training programs, but most internal medicine residency programs are only beginning to provide POCUS training.1

Several features distinguish POCUS from comprehensive ultrasound examinations. First, POCUS is designed to answer focused questions, whereas comprehensive ultrasound examinations evaluate all organs in an anatomical region; for example, an abdominal POCUS exam may evaluate only for presence or absence of intraperitoneal free fluid, whereas a comprehensive examination of the right upper quadrant will evaluate the liver, gallbladder, and biliary ducts. Second, POCUS examinations are generally performed by the same clinician who generates the relevant clinical question to answer with POCUS and ultimately integrates the findings into the patient’s care.2 By contrast, comprehensive ultrasound examinations involve multiple providers and steps: a clinician generates a relevant clinical question and requests an ultrasound examination that is acquired by a sonographer, interpreted by a radiologist, and reported back to the requesting clinician. Third, POCUS is often used to evaluate multiple body systems. For example, to evaluate a patient with undifferentiated hypotension, a multisystem POCUS examination of the heart, inferior vena cava, lungs, abdomen, and lower extremity veins is typically performed. Finally, POCUS examinations can be performed serially to investigate changes in clinical status or evaluate response to therapy, such as monitoring the heart, lungs, and inferior vena cava during fluid resuscitation.

The purpose of this position statement is to inform a broad audience about how hospitalists are using diagnostic and procedural applications of POCUS. This position statement does not mandate that hospitalists use POCUS. Rather, it is intended to provide guidance on the safe and effective use of POCUS by the hospitalists who use it and the administrators who oversee its use. We discuss POCUS (1) applications, (2) training, (3) assessments, and (4) program management. This position statement was reviewed and approved by the Society of Hospital Medicine (SHM) Executive Committee in March 2018.

APPLICATIONS

Common diagnostic and procedural applications of POCUS used by hospitalists are listed in Table 1. Selected evidence supporting the use of these applications is described in the supplementary online content (Appendices 1–8 available at http://journalofhospitalmedicine.com) and SHM position statements on specific ultrasound-guided bedside procedures.3,4 Additional applications not listed in Table 1 that may be performed by some hospitalists include assessment of the eyes, stomach, bowels, ovaries, pregnancy, and testicles, as well as performance of regional anesthesia. Moreover, hospitalists caring for pediatric and adolescent patients may use additional applications besides those listed here. Many hospitalists already perform more complex and sophisticated POCUS examinations than those listed in Table 1. The scope of POCUS use by hospitalists continues to expand, and this position statement should not restrict that expansion.

As outlined in our earlier position statements,3,4 ultrasound guidance lowers complication rates and increases success rates of invasive bedside procedures. Diagnostic POCUS can guide clinical decision making prior to bedside procedures. For instance, hospitalists may use POCUS to assess the size and character of a pleural effusion to help determine the most appropriate management strategy: observation, medical treatment, thoracentesis, chest tube placement, or surgical therapy. Furthermore, diagnostic POCUS can be used to rapidly assess for immediate postprocedural complications, such as pneumothorax, or if the patient develops new symptoms.

TRAINING

Basic Knowledge

Basic knowledge includes fundamentals of ultrasound physics; safety;5 anatomy; physiology; and device operation, including maintenance and cleaning. Basic knowledge can be taught by multiple methods, including live or recorded lectures, online modules, or directed readings.

Image Acquisition

Training should occur across multiple types of patients (eg, obese, cachectic, postsurgical) and clinical settings (eg, intensive care unit, general medicine wards, emergency department) when available. Training is largely hands-on because the relevant skills involve integration of 3D anatomy with spatial manipulation, hand-eye coordination, and fine motor movements. Virtual reality ultrasound simulators may accelerate mastery, particularly for cardiac image acquisition, and expose learners to standardized sets of pathologic findings. Real-time bedside feedback on image acquisition is ideal because understanding how ultrasound probe manipulation affects the images acquired is essential to learning.

Image Interpretation

Training in image interpretation relies on visual pattern recognition of normal and abnormal findings. Therefore, the spectrum of normal and abnormal findings reviewed during training should be broad, and learners should maintain a log of the abnormalities they have identified. Giving real-time feedback at the bedside is ideal because of the connection between image acquisition and interpretation. Image interpretation can be taught through didactic sessions, image review sessions, or review of teaching files with annotated images.

Clinical Integration

Learners must interpret and integrate image findings with other clinical data, considering the image quality, patient characteristics, and changing physiology. Clinical integration should be taught by instructors who share clinical knowledge similar to that of the learners. Although sonographers are well suited to teach image acquisition, they should not be the sole instructors teaching hospitalists how to integrate ultrasound findings into clinical decision making. Likewise, emphasis should be placed on the appropriate use of POCUS within a provider’s skill set. Learners must appreciate the clinical significance of POCUS findings, including recognition of incidental findings that may require further workup. Supplemental training in clinical integration can occur through didactics that include complex patient scenarios.

Pathways

Clinical competency can be achieved with training adherent to five criteria. First, the training environment should be similar to where the trainee will practice. Second, training and feedback should occur in real time. Third, specific applications should be taught rather than broad training in “hospitalist POCUS.” Each application requires unique skills and knowledge, including image acquisition pitfalls and artifacts. Fourth, clinical competence must be achieved and demonstrated; it is not necessarily gained through experience. Fifth, once competency is achieved, continued education and feedback are necessary to ensure it is maintained.

Residency-based POCUS training pathways can best fulfill these criteria. They may eventually become commonplace, but until then, alternative pathways must exist for hospitalist providers who are already in practice. There are three important attributes of such pathways. First, administrators’ expectations about learners’ clinical productivity must be realistically, but only temporarily, relaxed; otherwise, competing demands on time will likely overwhelm learners and subvert training. Second, training should begin through a local or national hands-on training program. The SHM POCUS certificate program consolidates training for common diagnostic POCUS applications for hospitalists.6 Other medical societies offer training for their respective clinical specialties.7 Third, once basic POCUS training has begun, longitudinal training should continue, ideally with a local hospitalist POCUS expert.

In some settings, a subgroup of hospitalists may not desire, or be able to achieve, competency in the manual skills of POCUS image acquisition. Nevertheless, hospitalists may still find value in understanding POCUS nomenclature, image pattern recognition, and the evidence and pitfalls behind clinical integration of specific POCUS findings. This subset of POCUS skills allows hospitalists to communicate effectively with and understand the clinical decisions made by their colleagues who are competent in POCUS use.

The minimal skills a hospitalist should possess to serve as a POCUS trainer include proficiency in the basic knowledge, image acquisition, image interpretation, and clinical integration of the POCUS applications being taught; effectiveness as a hands-on instructor of image acquisition skills; and an in-depth understanding of common POCUS pitfalls and limitations.

ASSESSMENTS

Assessment methods for POCUS can include the following: knowledge-based questions, image acquisition using task-specific checklists on human or simulation models, image interpretation using a series of videos or still images with normal and abnormal findings, clinical integration using “next best step” in a multiple choice format with POCUS images, and simulation-based clinical scenarios. Assessment methods should be aligned with local availability of resources and trainers.

Basic Knowledge

Basic knowledge can be assessed via multiple choice questions covering ultrasound physics, image optimization, relevant anatomy, and limitations of POCUS imaging. Because basic knowledge lies primarily in the cognitive domain, this assessment does not evaluate manual skills.

Image Acquisition

Image acquisition can be assessed by observation and rating of image quality. Where resources allow, assessment of image acquisition is likely best done through a combination of developing an image portfolio with a minimum number of high-quality images and direct observation of image acquisition by an expert. Various programs have utilized minimum numbers of images acquired to help define competence with image acquisition skills.6–8 Although minimums may be a necessary step toward competence, using them as the sole means to determine competence does not account for variable learning curves.9 As with other manual skills in hospital medicine, such as ultrasound-guided bedside procedures, minimum numbers are best used as a starting point for assessments.3,10 In this regard, portfolio development with meticulous attention to the gain, depth, and proper tomographic plane of images can monitor a hospitalist’s progress toward competence by providing objective assessments and feedback. Simulation may also be used because it allows assessment of image acquisition skills and provides an opportunity for real-time feedback, similar to direct observation but without actual patients.
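To make the portfolio-plus-observation approach concrete, the sketch below shows one hypothetical way a training program might track a learner's submitted clips against an illustrative minimum-image threshold. The record fields, application names, and numeric thresholds are assumptions for illustration only; this position statement does not prescribe specific minimums or software.

```python
from dataclasses import dataclass

@dataclass
class PortfolioEntry:
    """One archived POCUS clip submitted by a learner (hypothetical record)."""
    application: str        # eg, "lung", "cardiac", "IVC"
    adequate_quality: bool  # judged by the reviewing expert
    directly_observed: bool # acquired under direct expert observation

def meets_portfolio_threshold(entries, application,
                              minimum_images=25, minimum_observed=5):
    """Return True when a learner has enough adequate-quality clips for one
    application, including some acquired under direct observation.
    The thresholds are illustrative; minimum numbers are only a starting
    point for assessment, not proof of competence."""
    relevant = [e for e in entries if e.application == application]
    adequate = sum(e.adequate_quality for e in relevant)
    observed = sum(e.adequate_quality and e.directly_observed for e in relevant)
    return adequate >= minimum_images and observed >= minimum_observed

# Toy example: 30 archived lung clips plus 6 directly observed acquisitions
portfolio = [PortfolioEntry("lung", True, False) for _ in range(30)]
portfolio += [PortfolioEntry("lung", True, True) for _ in range(6)]
print(meets_portfolio_threshold(portfolio, "lung"))  # True
```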

Image Interpretation

Image interpretation is best assessed by an expert observing the learner at the bedside; however, when bedside assessment is not possible, image interpretation skills may be assessed using multiple choice or free text interpretation of archived ultrasound images with normal and abnormal findings. This assessment is often incorporated into the portfolio development portion of a training program, as learners can submit their image interpretation along with the video clip. Both normal and abnormal images can be used to assess anatomic recognition and interpretation. Emphasis should be placed on determining when an image is suboptimal for diagnosis (eg, an incomplete exam or poor-quality images). Quality assurance programs should incorporate structured feedback sessions.

Clinical Integration

Assessment of clinical integration can be completed through case scenarios, often delivered via computer-based testing, that assess knowledge, interpretation of images, and integration of findings into clinical decision making. Assessments should combine specific POCUS applications to evaluate common clinical problems in hospital medicine, such as undifferentiated hypotension and dyspnea. High-fidelity simulators can be used to blend clinical case scenarios with image acquisition, image interpretation, and clinical integration. When feasible, comprehensive feedback on how providers acquire, interpret, and apply ultrasound at the bedside is likely the best mechanism to assess clinical integration. This process can be done with a hospitalist’s own patients.

General Assessment

A general assessment that includes a summative knowledge assessment and a hands-on skills assessment using task-specific checklists can be performed upon completion of training. A high-fidelity simulator with dynamic or virtual anatomy can provide reproducible, standardized assessments with variation in the type and difficulty of cases. When available, we encourage the use of dynamic assessments on actual patients who have both normal and abnormal ultrasound findings because simulated patient scenarios have limitations, even with the use of high-fidelity simulators. Programs should use both formative and summative assessments for evaluation. Quantitative scoring systems using checklists are likely the best framework.11,12
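As a minimal sketch of a quantitative, checklist-based scoring system of the kind referenced above, the example below scores a hands-on assessment as the fraction of task-specific items completed. The checklist items and passing threshold are hypothetical, chosen only to illustrate the scoring logic; real programs would define their own items and standards.

```python
# Hypothetical task-specific checklist for a hands-on image acquisition
# assessment; the items and passing fraction are illustrative only.
ACQUISITION_CHECKLIST = [
    "Selects appropriate transducer and preset",
    "Confirms patient identifiers before scanning",
    "Obtains target view with adequate depth and gain",
    "Identifies relevant structures in the acquired view",
    "Archives images at completion of the examination",
]

def checklist_score(items_completed, checklist=ACQUISITION_CHECKLIST,
                    passing_fraction=0.8):
    """Return the fraction of checklist items completed and a pass/fail flag."""
    completed = sum(item in items_completed for item in checklist)
    fraction = completed / len(checklist)
    return fraction, fraction >= passing_fraction

score, passed = checklist_score({
    "Selects appropriate transducer and preset",
    "Obtains target view with adequate depth and gain",
    "Identifies relevant structures in the acquired view",
    "Archives images at completion of the examination",
})
print(f"{score:.0%} of items completed; passed: {passed}")  # 80%; passed: True
```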

CERTIFICATES AND CERTIFICATION

A certificate of completion is proof of a provider’s participation in an educational activity; it does not equate with competency, though it may be a step toward it. Most POCUS training workshops and short courses provide certificates of completion. Certification of competency is an attestation of a hospitalist’s basic competence within a defined scope of practice (Table 2).13 However, without longitudinal supervision and feedback, skills can decay; therefore, we recommend a longitudinal training program that provides mentored feedback and incorporates periodic competency assessments. At present, no national board certification in POCUS is available to grant external certification of competency for hospitalists.

External Certificate

Certificates of completion can be granted externally by a national organization. One external certificate of completion designed for hospitalists is the POCUS Certificate of Completion offered by SHM in collaboration with CHEST.6 This certificate program provides regional training options and longitudinal portfolio development. Other external certificates are also available to hospitalists.7,14,15

Most hospitalists are boarded by the American Board of Internal Medicine (ABIM) or the American Board of Family Medicine. These boards do not yet include certification of competency in POCUS. Other specialty boards, such as emergency medicine, do include POCUS competency. For emergency medicine, completion of an accredited residency training program and certification by the national board include POCUS competency.

Internal Certificate

There are a few examples of successful local institutional programs that have provided internal certificates of competency.12,16 Competency assessments require significant resources, including investment by both faculty and learners. Ongoing evaluation of competency should be based on quality assurance processes.

Credentialing and Privileging

In 1999, the American Medical Association (AMA) House of Delegates passed a resolution (AMA HR. 802) recommending that hospitals follow specialty-specific guidelines for privileging decisions related to POCUS use.17 The resolution included a statement that “ultrasound imaging is within the scope of practice of appropriately trained physicians.”

Some institutions have begun to rely on a combination of internal and external certificate programs to grant privileges to hospitalists.10 Although specific privileges for POCUS may not be required in some hospitals, some institutions may require certification of training and assessments prior to granting permission to use POCUS.

Hospitalist programs are encouraged to evaluate ongoing POCUS use by their providers after granting initial permission. If privileging is instituted by a hospital, hospitalists must play a significant role in determining the requirements for privileging and ongoing maintenance of skills.

Maintenance of Skills

All medical skills can decay with disuse, including those associated with POCUS.12,18 Thus, POCUS users should continue using POCUS regularly in clinical practice and participate in POCUS continuing medical education activities, ideally with ongoing assessments. Maintenance of skills may be confirmed through routine participation in a quality assurance program.

PROGRAM MANAGEMENT

Use of POCUS in hospital medicine has unique considerations, and hospitalists should be integrally involved in decision making surrounding institutional POCUS program management. Appointing a dedicated POCUS director can help a program succeed.8

Equipment and Image Archiving

Several factors are important to consider when selecting an ultrasound machine: portability, screen size, and ease of use; integration with the electronic medical record and options for image archiving; manufacturer’s service plan, including technical and clinical support; and compliance with local infection control policies. The ability to easily archive and retrieve images is essential for quality assurance, continuing education, institutional quality improvement, documentation, and reimbursement. In certain scenarios, image archiving may not be possible (such as with personal handheld devices or in emergency situations) or necessary (such as with frequent serial examinations during fluid resuscitation). An image archive is ideally linked to reports, orders, and billing software.10,19 If such linkages are not feasible, parallel external storage that complies with regulatory standards (ie, HIPAA compliance) may be suitable.20

Documentation and Billing

Components of documentation include the indication and type of ultrasound examination performed, date and time of the examination, patient identifying information, name of provider(s) acquiring and interpreting the images, specific scanning protocols used, patient position, probe used, and findings. Documentation can occur through a standalone note or as part of another note, such as a progress note. Whenever possible, documentation should be timely to facilitate communication with other providers.
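The documentation components listed above map naturally onto a structured template. The sketch below is a hypothetical example of such a template expressed as a simple data record; the field names and example values are illustrative assumptions, not a required documentation format.

```python
from dataclasses import dataclass, asdict
from datetime import datetime

@dataclass
class PocusDocumentation:
    """Hypothetical structured note mirroring the components listed above."""
    indication: str
    examination_type: str        # eg, "limited lung ultrasound"
    performed_at: datetime       # date and time of the examination
    patient_id: str              # patient identifying information
    acquiring_provider: str
    interpreting_provider: str
    scanning_protocol: str
    patient_position: str
    probe: str
    findings: str

# Example entry; all values are invented for illustration
note = PocusDocumentation(
    indication="Dyspnea",
    examination_type="Limited lung ultrasound",
    performed_at=datetime(2018, 3, 1, 14, 30),
    patient_id="MRN-0000",
    acquiring_provider="Hospitalist A",
    interpreting_provider="Hospitalist A",
    scanning_protocol="Bilateral anterior and lateral chest",
    patient_position="Supine",
    probe="Phased array",
    findings="No pneumothorax; scattered B-lines at both bases.",
)
print(asdict(note))
```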

Billing is supported through the AMA Current Procedural Terminology codes for “focused” or “limited” ultrasound examinations (Appendix 9). The following three criteria must be satisfied for billing. First, images must be permanently stored. Specific requirements vary by insurance policy, though current practice suggests a minimum of one image demonstrating relevant anatomy and pathology for the ultrasound examination coded. For ultrasound-guided procedures that require needle insertion, images should be captured at the point of interest, and a procedure note should reflect that the needle was guided and visualized under ultrasound.21 Second, proper documentation must be entered in the medical record. Third, local institutional privileges for POCUS must be considered. Although privileges are not required to bill, some hospitals or payers may require them.
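The three billing criteria can be read as a simple gating check performed before a charge is submitted. The function below is an illustrative sketch of that logic only, assuming hypothetical inputs; it is not billing guidance, and actual requirements vary by payer and institution.

```python
def ready_to_bill(archived_image_count: int,
                  documentation_complete: bool,
                  is_guided_procedure: bool,
                  needle_visualization_documented: bool,
                  privileges_satisfied: bool = True) -> bool:
    """Sketch of the three criteria described above: at least one permanently
    stored image, proper documentation (including needle visualization for
    ultrasound-guided procedures), and any local privileging requirement."""
    images_ok = archived_image_count >= 1
    documentation_ok = documentation_complete and (
        not is_guided_procedure or needle_visualization_documented)
    return images_ok and documentation_ok and privileges_satisfied

# A diagnostic examination with one archived image and a completed note
print(ready_to_bill(1, True, False, False))  # True
# An ultrasound-guided procedure lacking needle-visualization documentation
print(ready_to_bill(2, True, True, False))   # False
```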

Quality Assurance

Published guidelines on quality assurance in POCUS are available from different specialty organizations, including emergency medicine, pediatric emergency medicine, critical care, anesthesiology, obstetrics, and cardiology.8,22–28 Quality assurance is aimed at ensuring that physicians maintain basic competency in using POCUS to influence bedside decisions.

Quality assurance should be carried out by an individual or committee with expertise in POCUS. Multidisciplinary quality assurance programs in which hospital medicine providers work collaboratively with other POCUS providers have been shown to be highly effective.10 Oversight includes ensuring that providers using POCUS are appropriately trained,10,22,28 using the equipment correctly,8,26,28 and documenting properly. Some programs have implemented mechanisms to review and provide feedback on image acquisition, interpretation, and clinical integration.8,10 Other programs have compared POCUS findings with referral studies, such as comprehensive ultrasound examinations.

CONCLUSIONS

Practicing hospitalists must continue to collaborate with their institutions to build POCUS capabilities. In particular, they must work with their local privileging body to determine what credentials are required. The distinction between certificates of completion and certificates of competency, including whether those certificates are internal or external, is important in the credentialing process.

External certificates of competency are currently unavailable for most practicing hospitalists because ABIM certification does not include POCUS-related competencies. As internal medicine residency training programs begin to adopt POCUS training and certification into their educational curricula, we foresee a need to update the ABIM Policies and Procedures for Certification. Until then, we recommend that certificates of competency be defined and granted internally by local hospitalist groups.

Given the many advantages of POCUS over traditional tools, we anticipate its increasing implementation among hospitalists in the future. As with all medical technology, its role in clinical care should be continuously reexamined and redefined through health services research. Such information will be useful in developing practice guidelines, educational curricula, and training standards.

Acknowledgments

The authors would like to thank all members who participated in the discussion and finalization of this position statement during the Point-of-care Ultrasound Faculty Retreat at the 2018 Society of Hospital Medicine Annual Conference: Saaid Abdel-Ghani, Brandon Boesch, Joel Cho, Ria Dancel, Renee Dversdal, Ricardo Franco-Sadud, Benjamin Galen, Trevor P. Jensen, Mohit Jindal, Gordon Johnson, Linda M. Kurian, Gigi Liu, Charles M. LoPresti, Brian P. Lucas, Venkat Kalidindi, Benji Matthews, Anna Maw, Gregory Mints, Kreegan Reierson, Gerard Salame, Richard Schildhouse, Daniel Schnobrich, Nilam Soni, Kirk Spencer, Hiromizu Takahashi, David M. Tierney, Tanping Wong, and Toru Yamada.

References

1. Schnobrich DJ, Mathews BK, Trappey BE, Muthyala BK, Olson APJ. Entrusting internal medicine residents to use point of care ultrasound: Towards improved assessment and supervision. Med Teach. 2018:1-6. doi:10.1080/0142159X.2018.1457210.
2. Soni NJ, Lucas BP. Diagnostic point-of-care ultrasound for hospitalists. J Hosp Med. 2015;10(2):120-124. doi:10.1002/jhm.2285.
3. Lucas BP, Tierney DM, Jensen TP, et al. Credentialing of hospitalists in ultrasound-guided bedside procedures: a position statement of the society of hospital medicine. J Hosp Med. 2018;13(2):117-125. doi:10.12788/jhm.2917.
4. Dancel R, Schnobrich D, Puri N, et al. Recommendations on the use of ultrasound guidance for adult thoracentesis: a position statement of the society of hospital medicine. J Hosp Med. 2018;13(2):126-135. doi:10.12788/jhm.2940.
5. National Council on Radiation Protection and Measurements. Implementation of the Principle of As Low As Reasonably Achievable (ALARA) for Medical and Dental Personnel; 1990.
6. Society of Hospital Medicine. Point of Care Ultrasound course: https://www.hospitalmedicine.org/clinical-topics/ultrasonography-cert/. Accessed February 6, 2018.
7. Critical Care Ultrasonography Certificate of Completion Program. CHEST. American College of Chest Physicians. http://www.chestnet.org/Education/Advanced-Clinical-Training/Certificate-of-Completion-Program/Critical-Care-Ultrasonography. Accessed February 6, 2018.
8. American College of Emergency Physicians Policy Statement: Emergency Ultrasound Guidelines. 2016. https://www.acep.org/Clinical---Practice-Management/ACEP-Ultrasound-Guidelines/. Accessed February 6, 2018.
9. Blehar DJ, Barton B, Gaspari RJ. Learning curves in emergency ultrasound education. Acad Emerg Med. 2015;22(5):574-582. doi:10.1111/acem.12653.
10. Mathews BK, Zwank M. Hospital medicine point of care ultrasound credentialing: an example protocol. J Hosp Med. 2017;12(9):767-772. doi:10.12788/jhm.2809.
11. Barsuk JH, McGaghie WC, Cohen ER, Balachandran JS, Wayne DB. Use of simulation-based mastery learning to improve the quality of central venous catheter placement in a medical intensive care unit. J Hosp Med. 2009;4(7):397-403. doi:10.1002/jhm.468.
12. Mathews BK, Reierson K, Vuong K, et al. The design and evaluation of the Comprehensive Hospitalist Assessment and Mentorship with Portfolios (CHAMP) ultrasound program. J Hosp Med. 2018;13(8):544-550. doi:10.12788/jhm.2938.
13. Soni NJ, Tierney DM, Jensen TP, Lucas BP. Certification of point-of-care ultrasound competency. J Hosp Med. 2017;12(9):775-776. doi:10.12788/jhm.2812.
14. Ultrasound Certification for Physicians. Alliance for Physician Certification and Advancement. APCA. https://apca.org/. Accessed February 6, 2018.
15. National Board of Echocardiography, Inc. https://www.echoboards.org/EchoBoards/News/2019_Adult_Critical_Care_Echocardiography_Exam.aspx. Accessed June 18, 2018.
16. Tierney DM. Internal Medicine Bedside Ultrasound Program (IMBUS). Abbott Northwestern. http://imbus.anwresidency.com/index.html. Accessed February 6, 2018.
17. American Medical Association House of Delegates Resolution H-230.960: Privileging for Ultrasound Imaging. Resolution 802. Policy Finder Website. http://search0.ama-assn.org/search/pfonline. Published 1999. Accessed February 18, 2018.
18. Kelm D, Ratelle J, Azeem N, et al. Longitudinal ultrasound curriculum improves long-term retention among internal medicine residents. J Grad Med Educ. 2015;7(3):454-457. doi:10.4300/JGME-14-00284.1.
19. Flannigan MJ, Adhikari S. Point-of-care ultrasound work flow innovation: impact on documentation and billing. J Ultrasound Med. 2017;36(12):2467-2474. doi:10.1002/jum.14284.
20. Emergency Ultrasound: Workflow White Paper. https://www.acep.org/uploadedFiles/ACEP/memberCenter/SectionsofMembership/ultra/Workflow%20White%20Paper.pdf. Published 2013. Accessed February 18, 2018.
21. Ultrasound Coding and Reimbursement Document 2009. Emergency Ultrasound Section. American College of Emergency Physicians. http://emergencyultrasoundteaching.com/assets/2009_coding_update.pdf. Published 2009. Accessed February 18, 2018.
22. Mayo PH, Beaulieu Y, Doelken P, et al. American College of Chest Physicians/La Societe de Reanimation de Langue Francaise statement on competence in critical care ultrasonography. Chest. 2009;135(4):1050-1060. doi:10.1378/chest.08-2305.
23. Frankel HL, Kirkpatrick AW, Elbarbary M, et al. Guidelines for the appropriate use of bedside general and cardiac ultrasonography in the evaluation of critically ill patients-part I: general ultrasonography. Crit Care Med. 2015;43(11):2479-2502. doi:10.1097/ccm.0000000000001216.
24. Levitov A, Frankel HL, Blaivas M, et al. Guidelines for the appropriate use of bedside general and cardiac ultrasonography in the evaluation of critically ill patients-part ii: cardiac ultrasonography. Crit Care Med. 2016;44(6):1206-1227. doi:10.1097/ccm.0000000000001847.
25. ACR–ACOG–AIUM–SRU Practice Parameter for the Performance of Obstetrical Ultrasound. https://www.acr.org/-/media/ACR/Files/Practice-Parameters/us-ob.pdf. Published 2013. Accessed February 18, 2018.
26. AIUM practice guideline for documentation of an ultrasound examination. J Ultrasound Med. 2014;33(6):1098-1102. doi:10.7863/ultra.33.6.1098.
27. Marin JR, Lewiss RE. Point-of-care ultrasonography by pediatric emergency medicine physicians. Pediatrics. 2015;135(4):e1113-e1122. doi:10.1542/peds.2015-0343.
28. Spencer KT, Kimura BJ, Korcarz CE, Pellikka PA, Rahko PS, Siegel RJ. Focused cardiac ultrasound: recommendations from the American Society of Echocardiography. J Am Soc Echocardiogr. 2013;26(6):567-581. doi:10.1016/j.echo.2013.04.001.

Author and Disclosure Information

1Division of General & Hospital Medicine, The University of Texas Health San Antonio, San Antonio, Texas; 2Section of Hospital Medicine, South Texas Veterans Health Care System, San Antonio, Texas; 3Divisions of General Internal Medicine and Hospital Pediatrics, University of Minnesota, Minneapolis, Minnesota; 4Department of Hospital Medicine, HealthPartners Medical Group, Regions Hospital, St. Paul, Minnesota; 5Department of Medical Education, Abbott Northwestern Hospital, Minneapolis, Minnesota; 6Division of Hospital Medicine, Department of Medicine, University of California San Francisco, San Francisco, California; 7Division of Hospital Medicine, Department of Medicine, University of North Carolina, Chapel Hill, North Carolina; 8Division of General Pediatrics and Adolescent Medicine, Department of Pediatrics, University of North Carolina, Chapel Hill, North Carolina; 9Department of Hospital Medicine, Kaiser Permanente San Francisco Medical Center, San Francisco, California; 10Division of Hospital Medicine, Oregon Health & Science University, Portland, Oregon; 11Division of Hospital Medicine, Weill Cornell Medicine, New York, New York; 12Division of General Internal Medicine, Mayo Clinic, Rochester, Minnesota; 13Division of Hospital Medicine, Zucker School of Medicine at Hofstra Northwell, New Hyde Park, New York; 14Hospitalist Program, Division of General Internal Medicine, Department of Medicine, Johns Hopkins School of Medicine, Baltimore, Maryland; 15Division of Hospital Medicine, University of California Davis, Davis, California; 16Division of Hospital Medicine, Alameda Health System-Highland Hospital, Oakland, California; 17Louis Stokes Cleveland Veterans Affairs Hospital, Cleveland, Ohio; 18Case Western Reserve University School of Medicine, Cleveland, Ohio; 19Division of Hospital Medicine, University of Miami, Miami, Florida; 20Division of Hospital Medicine, Legacy Healthcare System, Portland, Oregon; 21Division of Hospital Medicine, University of Colorado, Aurora, Colorado; 22Department of Medicine, University of Central Florida, Naples, Florida; 23White River Junction VA Medical Center, White River Junction, Vermont; 24Geisel School of Medicine at Dartmouth College, Hanover, New Hampshire.

Funding

Nilam Soni: Department of Veterans Affairs, Quality Enhancement Research Initiative (QUERI) Partnered Evaluation Initiative Grant (HX002263-01A1). Brian P Lucas: Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development and Dartmouth SYNERGY, National Institutes of Health, National Center for Translational Science (UL1TR001086)

Disclaimer

The contents of this publication do not represent the views of the US Department of Veterans Affairs or the United States Government.

Publications
Topics
Sections
Files
Files
Author and Disclosure Information

1Division of General & Hospital Medicine, The University of Texas Health San Antonio, San Antonio, Texas; 2Section of Hospital Medicine, South Texas Veterans Health Care System, San Antonio, Texas; 3Divisions of General Internal Medicine and Hospital Pediatrics, University of Minnesota, Minneapolis, Minnesota; 4Department of Hospital Medicine, HealthPartners Medical Group, Regions Hospital, St. Paul, Minnesota; 5Department of Medical Education, Abbott Northwestern Hospital, Minneapolis, Minnesota; 6Division of Hospital Medicine, Department of Medicine, University of California San Francisco, San Francisco, California; 7Division of Hospital Medicine, Department of Medicine, University of North Carolina, Chapel Hill, North Carolina; 8Division of General Pediatrics and Adolescent Medicine, Department of Pediatrics, University of North Carolina, Chapel Hill, North Carolina; 9Department of Hospital Medicine, Kaiser Permanente San Francisco Medical Center, San Francisco, California; 10Division of Hospital Medicine, Oregon Health & Science University, Portland, Oregon; 11Division of Hospital Medicine, Weill Cornell Medicine, New York, New York; 12Division of General Internal Medicine, Mayo Clinic, Rochester, Minnesota; 13Division of Hospital Medicine, Zucker School of Medicine at Hofstra Northwell, New Hyde Park, New York; 14Hospitalist Program, Division of General Internal Medicine, Department of Medicine, Johns Hopkins School of Medicine, Baltimore, Maryland; 15Division of Hospital Medicine, University of California Davis, Davis, California; 16Division of Hospital Medicine, Alameda Health System-Highland Hospital, Oakland, California; 17Louis Stokes Cleveland Veterans Affairs Hospital, Cleveland, Ohio; 18Case Western Reserve University School of Medicine, Cleveland, Ohio; 19Division of Hospital Medicine, University of Miami, Miami, Florida; 20Division of Hospital Medicine, Legacy Healthcare System, Portland, Oregon; 21Division of Hospital Medicine, University of Colorado, Aurora, Colorado; 22Department of Medicine, University of Central Florida, Naples, Florida; 23White River Junction VA Medical Center, White River Junction, Vermont; 24Geisel School of Medicine at Dartmouth College, Hanover, New Hampshire.

Funding

Nilam Soni: Department of Veterans Affairs, Quality Enhancement Research Initiative (QUERI) Partnered Evaluation Initiative Grant (HX002263-01A1). Brian P Lucas: Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development and Dartmouth SYNERGY, National Institutes of Health, National Center for Translational Science (UL1TR001086)

Disclaimer

The contents of this publication do not represent the views of the US Department of Veterans Affairs or the United States Government.

Author and Disclosure Information

1Division of General & Hospital Medicine, The University of Texas Health San Antonio, San Antonio, Texas; 2Section of Hospital Medicine, South Texas Veterans Health Care System, San Antonio, Texas; 3Divisions of General Internal Medicine and Hospital Pediatrics, University of Minnesota, Minneapolis, Minnesota; 4Department of Hospital Medicine, HealthPartners Medical Group, Regions Hospital, St. Paul, Minnesota; 5Department of Medical Education, Abbott Northwestern Hospital, Minneapolis, Minnesota; 6Division of Hospital Medicine, Department of Medicine, University of California San Francisco, San Francisco, California; 7Division of Hospital Medicine, Department of Medicine, University of North Carolina, Chapel Hill, North Carolina; 8Division of General Pediatrics and Adolescent Medicine, Department of Pediatrics, University of North Carolina, Chapel Hill, North Carolina; 9Department of Hospital Medicine, Kaiser Permanente San Francisco Medical Center, San Francisco, California; 10Division of Hospital Medicine, Oregon Health & Science University, Portland, Oregon; 11Division of Hospital Medicine, Weill Cornell Medicine, New York, New York; 12Division of General Internal Medicine, Mayo Clinic, Rochester, Minnesota; 13Division of Hospital Medicine, Zucker School of Medicine at Hofstra Northwell, New Hyde Park, New York; 14Hospitalist Program, Division of General Internal Medicine, Department of Medicine, Johns Hopkins School of Medicine, Baltimore, Maryland; 15Division of Hospital Medicine, University of California Davis, Davis, California; 16Division of Hospital Medicine, Alameda Health System-Highland Hospital, Oakland, California; 17Louis Stokes Cleveland Veterans Affairs Hospital, Cleveland, Ohio; 18Case Western Reserve University School of Medicine, Cleveland, Ohio; 19Division of Hospital Medicine, University of Miami, Miami, Florida; 20Division of Hospital Medicine, Legacy Healthcare System, Portland, Oregon; 21Division of Hospital Medicine, University of Colorado, Aurora, Colorado; 22Department of Medicine, University of Central Florida, Naples, Florida; 23White River Junction VA Medical Center, White River Junction, Vermont; 24Geisel School of Medicine at Dartmouth College, Hanover, New Hampshire.

Funding

Nilam Soni: Department of Veterans Affairs, Quality Enhancement Research Initiative (QUERI) Partnered Evaluation Initiative Grant (HX002263-01A1). Brian P Lucas: Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development and Dartmouth SYNERGY, National Institutes of Health, National Center for Translational Science (UL1TR001086)

Disclaimer

The contents of this publication do not represent the views of the US Department of Veterans Affairs or the United States Government.

Article PDF
Article PDF
Related Articles

Many hospitalists incorporate point-of-care ultrasound (POCUS) into their daily practice because it adds value to their bedside evaluation of patients. However, standards for training and assessing hospitalists in POCUS have not yet been established. Other acute care specialties, including emergency medicine and critical care medicine, have already incorporated POCUS into their graduate medical education training programs, but most internal medicine residency programs are only beginning to provide POCUS training.1

Several features distinguish POCUS from comprehensive ultrasound examinations. First, POCUS is designed to answer focused questions, whereas comprehensive ultrasound examinations evaluate all organs in an anatomical region; for example, an abdominal POCUS exam may evaluate only for presence or absence of intraperitoneal free fluid, whereas a comprehensive examination of the right upper quadrant will evaluate the liver, gallbladder, and biliary ducts. Second, POCUS examinations are generally performed by the same clinician who generates the relevant clinical question to answer with POCUS and ultimately integrates the findings into the patient’s care.2 By contrast, comprehensive ultrasound examinations involve multiple providers and steps: a clinician generates a relevant clinical question and requests an ultrasound examination that is acquired by a sonographer, interpreted by a radiologist, and reported back to the requesting clinician. Third, POCUS is often used to evaluate multiple body systems. For example, to evaluate a patient with undifferentiated hypotension, a multisystem POCUS examination of the heart, inferior vena cava, lungs, abdomen, and lower extremity veins is typically performed. Finally, POCUS examinations can be performed serially to investigate changes in clinical status or evaluate response to therapy, such as monitoring the heart, lungs, and inferior vena cava during fluid resuscitation.

The purpose of this position statement is to inform a broad audience about how hospitalists are using diagnostic and procedural applications of POCUS. This position statement does not mandate that hospitalists use POCUS. Rather, it is intended to provide guidance on the safe and effective use of POCUS by the hospitalists who use it and the administrators who oversee its use. We discuss POCUS (1) applications, (2) training, (3) assessments, and (4) program management. This position statement was reviewed and approved by the Society of Hospital Medicine (SHM) Executive Committee in March 2018.

 

 

APPLICATIONS

Common diagnostic and procedural applications of POCUS used by hospitalists are listed in Table 1. Selected evidence supporting the use of these applications is described in the supplementary online content (Appendices 1–8 available at http://journalofhospitalmedicine.com) and SHM position statements on specific ultrasound-guided bedside procedures.3,4 Additional applications not listed in Table 1 that may be performed by some hospitalists include assessment of the eyes, stomach, bowels, ovaries, pregnancy, and testicles, as well as performance of regional anesthesia. Moreover, hospitalists caring for pediatric and adolescent patients may use additional applications besides those listed here. Currently, many hospitalists already perform more complex and sophisticated POCUS examinations than those listed in Table 1. The scope of POCUS use by hospitalists continues to expand, and this position statement should not restrict that expansion.

As outlined in our earlier position statements,3,4 ultrasound guidance lowers complication rates and increases success rates of invasive bedside procedures. Diagnostic POCUS can guide clinical decision making prior to bedside procedures. For instance, hospitalists may use POCUS to assess the size and character of a pleural effusion to help determine the most appropriate management strategy: observation, medical treatment, thoracentesis, chest tube placement, or surgical therapy. Furthermore, diagnostic POCUS can be used to rapidly assess for immediate postprocedural complications, such as pneumothorax, or if the patient develops new symptoms.

TRAINING

Basic Knowledge

Basic knowledge includes fundamentals of ultrasound physics; safety;4 anatomy; physiology; and device operation, including maintenance and cleaning. Basic knowledge can be taught by multiple methods, including live or recorded lectures, online modules, or directed readings.

Image Acquisition

Training should occur across multiple types of patients (eg, obese, cachectic, postsurgical) and clinical settings (eg, intensive care unit, general medicine wards, emergency department) when available. Training is largely hands-on because the relevant skills involve integration of 3D anatomy with spatial manipulation, hand-eye coordination, and fine motor movements. Virtual reality ultrasound simulators may accelerate mastery, particularly for cardiac image acquisition, and expose learners to standardized sets of pathologic findings. Real-time bedside feedback on image acquisition is ideal because understanding how ultrasound probe manipulation affects the images acquired is essential to learning.

Image Interpretation

Training in image interpretation relies on visual pattern recognition of normal and abnormal findings. Therefore, the normal to abnormal spectrum should be broad, and learners should maintain a log of what abnormalities have been identified. Giving real-time feedback at the bedside is ideal because of the connection between image acquisition and interpretation. Image interpretation can be taught through didactic sessions, image review sessions, or review of teaching files with annotated images.

Clinical Integration

Learners must interpret and integrate image findings with other clinical data considering the image quality, patient characteristics, and changing physiology. Clinical integration should be taught by instructors that share similar clinical knowledge as learners. Although sonographers are well suited to teach image acquisition, they should not be the sole instructors to teach hospitalists how to integrate ultrasound findings in clinical decision making. Likewise, emphasis should be placed on the appropriate use of POCUS within a provider’s skill set. Learners must appreciate the clinical significance of POCUS findings, including recognition of incidental findings that may require further workup. Supplemental training in clinical integration can occur through didactics that include complex patient scenarios.

 

 

Pathways

Clinical competency can be achieved with training adherent to five criteria. First, the training environment should be similar to where the trainee will practice. Second, training and feedback should occur in real time. Third, specific applications should be taught rather than broad training in “hospitalist POCUS.” Each application requires unique skills and knowledge, including image acquisition pitfalls and artifacts. Fourth, clinical competence must be achieved and demonstrated; it is not necessarily gained through experience. Fifth, once competency is achieved, continued education and feedback are necessary to ensure it is maintained.

Residency-based POCUS training pathways can best fulfill these criteria. They may eventually become commonplace, but until then alternative pathways must exist for hospitalist providers who are already in practice. There are three important attributes of such pathways. First, administrators’ expectations about learners’ clinical productivity must be realistically, but only temporarily, relaxed; otherwise, competing demands on time will likely overwhelm learners and subvert training. Second, training should begin through a local or national hands-on training program. The SHM POCUS certificate program consolidates training for common diagnostic POCUS applications for hospitalists.6 Other medical societies offer training for their respective clinical specialties.7 Third, once basic POCUS training has begun, longitudinal training should continue ideally with a local hospitalist POCUS expert.

In some settings, a subgroup of hospitalists may not desire, or be able to achieve, competency in the manual skills of POCUS image acquisition. Nevertheless, hospitalists may still find value in understanding POCUS nomenclature, image pattern recognition, and the evidence and pitfalls behind clinical integration of specific POCUS findings. This subset of POCUS skills allows hospitalists to communicate effectively with and understand the clinical decisions made by their colleagues who are competent in POCUS use.

The minimal skills a hospitalist should possess to serve as a POCUS trainer include proficiency of basic knowledge, image acquisition, image interpretation, and clinical integration of the POCUS applications being taught; effectiveness as a hands-on instructor to teach image acquisition skills; and an in-depth understanding of common POCUS pitfalls and limitations.

ASSESSMENTS

Assessment methods for POCUS can include the following: knowledge-based questions, image acquisition using task-specific checklists on human or simulation models, image interpretation using a series of videos or still images with normal and abnormal findings, clinical integration using “next best step” in a multiple choice format with POCUS images, and simulation-based clinical scenarios. Assessment methods should be aligned with local availability of resources and trainers.

Basic Knowledge

Basic knowledge can be assessed via multiple choice questions assessing knowledge of ultrasound physics, image optimization, relevant anatomy, and limitations of POCUS imaging. Basic knowledge lies primarily in the cognitive domain and does not assess manual skills.

Image Acquisition

Image acquisition can be assessed by observation and rating of image quality. Where resources allow, assessment of image acquisition is likely best done through a combination of developing an image portfolio with a minimum number of high quality images, plus direct observation of image acquisition by an expert. Various programs have utilized minimum numbers of images acquired to help define competence with image acquisition skills.6–8 Although minimums may be a necessary step to gain competence, using them as a sole means to determine competence does not account for variable learning curves.9 As with other manual skills in hospital medicine, such as ultrasound-guided bedside procedures, minimum numbers are best used as a starting point for assessments.3,10 In this regard, portfolio development with meticulous attention to the gain, depth, and proper tomographic plane of images can monitor a hospitalist’s progress toward competence by providing objective assessments and feedback. Simulation may also be used as it allows assessment of image acquisition skills and an opportunity to provide real-time feedback, similar to direct observation but without actual patients.

 

 

Image Interpretation

Image interpretation is best assessed by an expert observing the learner at bedside; however, when bedside assessment is not possible, image interpretation skills may be assessed using multiple choice or free text interpretation of archived ultrasound images with normal and abnormal findings. This is often incorporated into the portfolio development portion of a training program, as learners can submit their image interpretation along with the video clip. Both normal and abnormal images can be used to assess anatomic recognition and interpretation. Emphasis should be placed on determining when an image is suboptimal for diagnosis (eg, incomplete exam or poor-quality images). Quality assurance programs should incorporate structured feedback sessions.

Clinical Integration

Assessment of clinical integration can be completed through case scenarios that assess knowledge, interpretation of images, and integration of findings into clinical decision making, which is often delivered via a computer-based assessment. Assessments should combine specific POCUS applications to evaluate common clinical problems in hospital medicine, such as undifferentiated hypotension and dyspnea. High-fidelity simulators can be used to blend clinical case scenarios with image acquisition, image interpretation, and clinical integration. When feasible, comprehensive feedback on how providers acquire, interpret, and apply ultrasound at the bedside is likely the best mechanism to assess clinical integration. This process can be done with a hospitalist’s own patients.

General Assessment

A general assessment that includes a summative knowledge and hands-on skills assessment using task-specific checklists can be performed upon completion of training. A high-fidelity simulator with dynamic or virtual anatomy can provide reproducible standardized assessments with variation in the type and difficulty of cases. When available, we encourage the use of dynamic assessments on actual patients that have both normal and abnormal ultrasound findings because simulated patient scenarios have limitations, even with the use of high-fidelity simulators. Programs are recommended to use formative and summative assessments for evaluation. Quantitative scoring systems using checklists are likely the best framework.11,12

CERTIFICATES AND CERTIFICATION

A certificate of completion is proof of a provider’s participation in an educational activity; it does not equate with competency, though it may be a step toward it. Most POCUS training workshops and short courses provide certificates of completion. Certification of competency is an attestation of a hospitalist’s basic competence within a defined scope of practice (Table 2).13 However, without longitudinal supervision and feedback, skills can decay; therefore, we recommend a longitudinal training program that provides mentored feedback and incorporates periodic competency assessments. At present, no national board certification in POCUS is available to grant external certification of competency for hospitalists.

External Certificate

Certificates of completion can be external through a national organization. An external certificate of completion designed for hospitalists includes the POCUS Certificate of Completion offered by SHM in collaboration with CHEST.6 This certificate program provides regional training options and longitudinal portfolio development. Other external certificates are also available to hospitalists.7,14,15

Most hospitalists are boarded by the American Board of Internal Medicine or the American Board of Family Medicine. These boards do not yet include certification of competency in POCUS. Other specialty boards, such as emergency medicine, include competency in POCUS. For emergency medicine, completion of an accredited residency training program and certification by the national board includes POCUS competency.

 

 

Internal Certificate

There are a few examples of successful local institutional programs that have provided internal certificates of competency.12,14 Competency assessments require significant resources including investment by both faculty and learners. Ongoing evaluation of competency should be based on quality assurance processes.

Credentialing and Privileging

The American Medical Association (AMA) House of Delegates in 1999 passed a resolution (AMA HR. 802) recommending hospitals follow specialty-specific guidelines for privileging decisions related to POCUS use.17 The resolution included a statement that, “ultrasound imaging is within the scope of practice of appropriately trained physicians.”

Some institutions have begun to rely on a combination of internal and external certificate programs to grant privileges to hospitalists.10 Although specific privileges for POCUS may not be required in some hospitals, some institutions may require certification of training and assessments prior to granting permission to use POCUS.

Hospitalist programs are encouraged to evaluate ongoing POCUS use by their providers after granting initial permission. If privileging is instituted by a hospital, hospitalists must play a significant role in determining the requirements for privileging and ongoing maintenance of skills.

Maintenance of Skills

All medical skills can decay with disuse, including those associated with POCUS.12,18 Thus, POCUS users should continue using POCUS regularly in clinical practice and participate in POCUS continuing medical education activities, ideally with ongoing assessments. Maintenance of skills may be confirmed through routine participation in a quality assurance program.

PROGRAM MANAGEMENT

Use of POCUS in hospital medicine has unique considerations, and hospitalists should be integrally involved in decision making surrounding institutional POCUS program management. Appointing a dedicated POCUS director can help a program succeed.8

Equipment and Image Archiving

Several factors are important to consider when selecting an ultrasound machine: portability, screen size, and ease of use; integration with the electronic medical record and options for image archiving; manufacturer’s service plan, including technical and clinical support; and compliance with local infection control policies. The ability to easily archive and retrieve images is essential for quality assurance, continuing education, institutional quality improvement, documentation, and reimbursement. In certain scenarios, image archiving may not be possible (such as with personal handheld devices or in emergency situations) or necessary (such as with frequent serial examinations during fluid resuscitation). An image archive is ideally linked to reports, orders, and billing software.10,19 If such linkages are not feasible, parallel external storage that complies with regulatory standards (ie, HIPAA compliance) may be suitable.20

Documentation and Billing

Components of documentation include the indication and type of ultrasound examination performed, date and time of the examination, patient identifying information, name of provider(s) acquiring and interpreting the images, specific scanning protocols used, patient position, probe used, and findings. Documentation can occur through a standalone note or as part of another note, such as a progress note. Whenever possible, documentation should be timely to facilitate communication with other providers.

Billing is supported through the AMA Current Procedural Terminology codes for “focused” or “limited” ultrasound examinations (Appendix 9). The following three criteria must be satisfied for billing. First, images must be permanently stored. Specific requirements vary by insurance policy, though current practice suggests a minimum of one image demonstrating relevant anatomy and pathology for the ultrasound examination coded. For ultrasound-guided procedures that require needle insertion, images should be captured at the point of interest, and a procedure note should reflect that the needle was guided and visualized under ultrasound.21 Second, proper documentation must be entered in the medical record. Third, local institutional privileges for POCUS must be considered. Although privileges are not required to bill, some hospitals or payers may require them.

 

 

Quality Assurance

Published guidelines on quality assurance in POCUS are available from different specialty organizations, including emergency medicine, pediatric emergency medicine, critical care, anesthesiology, obstetrics, and cardiology.8,22–28 Quality assurance is aimed at ensuring that physicians maintain basic competency in using POCUS to influence bedside decisions.

Quality assurance should be carried out by an individual or committee with expertise in POCUS. Multidisciplinary QA programs in which hospital medicine providers are working collaboratively with other POCUS providers have been demonstrated to be highly effective.10 Oversight includes ensuring that providers using POCUS are appropriately trained,10,22,28 using the equipment correctly,8,26,28 and documenting properly. Some programs have implemented mechanisms to review and provide feedback on image acquisition, interpretation, and clinical integration.8,10 Other programs have compared POCUS findings with referral studies, such as comprehensive ultrasound examinations.

CONCLUSIONS

Practicing hospitalists must continue to collaborate with their institutions to build POCUS capabilities. In particular, they must work with their local privileging body to determine what credentials are required. The distinction between certificates of completion and certificates of competency, including whether those certificates are internal or external, is important in the credentialing process.

External certificates of competency are currently unavailable for most practicing hospitalists because ABIM certification does not include POCUS-related competencies. As internal medicine residency training programs begin to adopt POCUS training and certification into their educational curricula, we foresee a need to update the ABIM Policies and Procedures for Certification. Until then, we recommend that certificates of competency be defined and granted internally by local hospitalist groups.

Given the many advantages of POCUS over traditional tools, we anticipate its increasing implementation among hospitalists in the future. As with all medical technology, its role in clinical care should be continuously reexamined and redefined through health services research. Such information will be useful in developing practice guidelines, educational curricula, and training standards.

Acknowledgments

The authors would like to thank all members who participated in the discussion and finalization of this position statement during the Point-of-care Ultrasound Faculty Retreat at the 2018 Society of Hospital Medicine Annual Conference: Saaid Abdel-Ghani, Brandon Boesch, Joel Cho, Ria Dancel, Renee Dversdal, Ricardo Franco-Sadud, Benjamin Galen, Trevor P. Jensen, Mohit Jindal, Gordon Johnson, Linda M. Kurian, Gigi Liu, Charles M. LoPresti, Brian P. Lucas, Venkat Kalidindi, Benji Matthews, Anna Maw, Gregory Mints, Kreegan Reierson, Gerard Salame, Richard Schildhouse, Daniel Schnobrich, Nilam Soni, Kirk Spencer, Hiromizu Takahashi, David M. Tierney, Tanping Wong, and Toru Yamada.

References

1. Schnobrich DJ, Mathews BK, Trappey BE, Muthyala BK, Olson APJ. Entrusting internal medicine residents to use point of care ultrasound: Towards improved assessment and supervision. Med Teach. 2018:1-6. doi:10.1080/0142159X.2018.1457210.
2. Soni NJ, Lucas BP. Diagnostic point-of-care ultrasound for hospitalists. J Hosp Med. 2015;10(2):120-124. doi:10.1002/jhm.2285.
3. Lucas BP, Tierney DM, Jensen TP, et al. Credentialing of hospitalists in ultrasound-guided bedside procedures: a position statement of the society of hospital medicine. J Hosp Med. 2018;13(2):117-125. doi:10.12788/jhm.2917.
4. Dancel R, Schnobrich D, Puri N, et al. Recommendations on the use of ultrasound guidance for adult thoracentesis: a position statement of the society of hospital medicine. J Hosp Med. 2018;13(2):126-135. doi:10.12788/jhm.2940.
5. National Council on Radiation Protection and Measurements. Implementation of the Principle of As Low As Reasonably Achievable (ALARA) for Medical and Dental Personnel; 1990.
6. Society of Hospital Medicine. Point of Care Ultrasound course: https://www.hospitalmedicine.org/clinical-topics/ultrasonography-cert/. Accessed February 6, 2018.
7. Critical Care Ultrasonography Certificate of Completion Program. CHEST. American College of Chest Physicians. http://www.chestnet.org/Education/Advanced-Clinical-Training/Certificate-of-Completion-Program/Critical-Care-Ultrasonography. Accessed February 6, 2018.
8. American College of Emergency Physicians Policy Statement: Emergency Ultrasound Guidelines. 2016. https://www.acep.org/Clinical---Practice-Management/ACEP-Ultrasound-Guidelines/. Accessed February 6, 2018.
9. Blehar DJ, Barton B, Gaspari RJ. Learning curves in emergency ultrasound education. Acad Emerg Med. 2015;22(5):574-582. doi:10.1111/acem.12653.
10. Mathews BK, Zwank M. Hospital medicine point of care ultrasound credentialing: an example protocol. J Hosp Med. 2017;12(9):767-772. doi:10.12788/jhm.2809.
11. Barsuk JH, McGaghie WC, Cohen ER, Balachandran JS, Wayne DB. Use of simulation-based mastery learning to improve the quality of central venous catheter placement in a medical intensive care unit. J Hosp Med. 2009;4(7):397-403. doi:10.1002/jhm.468.
12. Mathews BK, Reierson K, Vuong K, et al. The design and evaluation of the Comprehensive Hospitalist Assessment and Mentorship with Portfolios (CHAMP) ultrasound program. J Hosp Med. 2018;13(8):544-550. doi:10.12788/jhm.2938.
13. Soni NJ, Tierney DM, Jensen TP, Lucas BP. Certification of point-of-care ultrasound competency. J Hosp Med. 2017;12(9):775-776. doi:10.12788/jhm.2812.
14. Ultrasound Certification for Physicians. Alliance for Physician Certification and Advancement. APCA. https://apca.org/. Accessed February 6, 2018.
15. National Board of Echocardiography, Inc. https://www.echoboards.org/EchoBoards/News/2019_Adult_Critical_Care_Echocardiography_Exam.aspx. Accessed June 18, 2018.
16. Tierney DM. Internal Medicine Bedside Ultrasound Program (IMBUS). Abbott Northwestern. http://imbus.anwresidency.com/index.html. Accessed February 6, 2018.
17. American Medical Association House of Delegates Resolution H-230.960: Privileging for Ultrasound Imaging. Resolution 802. Policy Finder Website. http://search0.ama-assn.org/search/pfonline. Published 1999. Accessed February 18, 2018.
18. Kelm D, Ratelle J, Azeem N, et al. Longitudinal ultrasound curriculum improves long-term retention among internal medicine residents. J Grad Med Educ. 2015;7(3):454-457. doi:10.4300/JGME-14-00284.1.
19. Flannigan MJ, Adhikari S. Point-of-care ultrasound work flow innovation: impact on documentation and billing. J Ultrasound Med. 2017;36(12):2467-2474. doi:10.1002/jum.14284.
20. Emergency Ultrasound: Workflow White Paper. https://www.acep.org/uploadedFiles/ACEP/memberCenter/SectionsofMembership/ultra/Workflow%20White%20Paper.pdf. Published 2013. Accessed February 18, 2018.
21. Ultrasound Coding and Reimbursement Document 2009. Emergency Ultrasound Section. American College of Emergency Physicians. http://emergencyultrasoundteaching.com/assets/2009_coding_update.pdf. Published 2009. Accessed February 18, 2018.
22. Mayo PH, Beaulieu Y, Doelken P, et al. American College of Chest Physicians/La Societe de Reanimation de Langue Francaise statement on competence in critical care ultrasonography. Chest. 2009;135(4):1050-1060. doi:10.1378/chest.08-2305.
23. Frankel HL, Kirkpatrick AW, Elbarbary M, et al. Guidelines for the appropriate use of bedside general and cardiac ultrasonography in the evaluation of critically ill patients-part I: general ultrasonography. Crit Care Med. 2015;43(11):2479-2502. doi:10.1097/ccm.0000000000001216.
24. Levitov A, Frankel HL, Blaivas M, et al. Guidelines for the appropriate use of bedside general and cardiac ultrasonography in the evaluation of critically ill patients-part ii: cardiac ultrasonography. Crit Care Med. 2016;44(6):1206-1227. doi:10.1097/ccm.0000000000001847.
25. ACR–ACOG–AIUM–SRU Practice Parameter for the Performance of Obstetrical Ultrasound. https://www.acr.org/-/media/ACR/Files/Practice-Parameters/us-ob.pdf. Published 2013. Accessed February 18, 2018.
26. AIUM practice guideline for documentation of an ultrasound examination. J Ultrasound Med. 2014;33(6):1098-1102. doi:10.7863/ultra.33.6.1098.
27. Marin JR, Lewiss RE. Point-of-care ultrasonography by pediatric emergency medicine physicians. Pediatrics. 2015;135(4):e1113-e1122. doi:10.1542/peds.2015-0343.
28. Spencer KT, Kimura BJ, Korcarz CE, Pellikka PA, Rahko PS, Siegel RJ. Focused cardiac ultrasound: recommendations from the American Society of Echocardiography. J Am Soc Echocardiogr. 2013;26(6):567-581. doi:10.1016/j.echo.2013.04.001.


© 2019 Society of Hospital Medicine

Published Online Only January 2, 2019. doi: 10.12788/jhm.3079

Corresponding Author: Nilam J. Soni, MD, MS; E-mail: sonin@uthscsa.edu; Telephone: 210-743-6030.

Trends in Inpatient Admission Comorbidity and Electronic Health Data: Implications for Resident Workload Intensity

Since the Accreditation Council for Graduate Medical Education (ACGME) imposed new duty hour regulations in 2003 and again in 2011, there have been concerns that the resulting compression of resident workload may have created a negative learning environment.1-3 Residents are now expected to complete more work in a reduced amount of time and with less flexibility.4 In addition to time constraints, the actual work of a resident today may differ from that of a resident in the past, especially in the area of clinical documentation.5 Restricting resident work hours without examining the workload may increase work intensity and counter the potential benefits of working fewer hours.6 Measuring workload, as well as electronic health record (EHR)–related stress, may also help combat burnout in internal medicine.7 Many components influence resident workload, including patient census, patient comorbidities and acuity, EHR data and other available documentation, and ancillary tasks and procedures.7 We define resident workload intensity as the responsibilities required to provide patient care within a specified time. There is a paucity of objective data regarding the workload intensity of residents, and such data are essential to graduate medical education reform and optimization. Patient census, ancillary responsibilities, number of procedures, and conference length and frequency are some of the variables that can be adjusted by each residency program. As a first step toward objective measurement of resident workload intensity, we evaluated the workload components that are least easily controlled by residency programs: patient comorbidity and EHR data at the time of patient admission.

METHODS

We conducted an observational, retrospective assessment of all admissions to the Louis Stokes Cleveland VA Medical Center (LSCVAMC) internal medicine service from January 1, 2000 to December 31, 2015. The inclusion criteria were admission to non-ICU internal medicine services and an admission note written by a resident physician. Otherwise, there were no exclusions. Data were accessed using VA Informatics and Computing Infrastructure. This study was approved by the LSCVAMC institutional review board.

We evaluated multiple patient characteristics for each admission that were accessible in the EHR at the time of hospital admission, including patient comorbidities, medication count, and number of notes and discharge summaries. The Deyo version of the Charlson Comorbidity Index (CCI) was used to score all patients based on the EHR’s active problem list at the time of admission.8,9 The CCI is a validated score created by categorizing comorbidities using International Classification of Diseases, Ninth and Tenth Revisions.8 Higher CCI scores predict increased mortality and resource usage. For each admission, we also counted the number of active medications, the number of prior discharge summaries, and the total number of notes available in the EHR at the time of patient admission. Patient admissions were grouped by calendar year, and the mean numbers of active medications, prior discharge summaries, and total available notes per patient during each year were calculated (Table). Data comparisons were completed between 2003 and 2011 as well as between 2011 and 2015; median data are also provided for these years (Table). These years were chosen to coincide with the duty hour changes and to allow comparison of an EHR that was no longer new but not yet mature (2003), a mature EHR (2011), and the most recent available data (2015).
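To make the scoring step concrete, the sketch below shows one way a Deyo-style CCI could be derived from an admission's active problem list. This is a minimal illustration and not the authors' code: the weight table and ICD-prefix mapping shown here are deliberately abbreviated and hypothetical (the full Deyo adaptation assigns weights of 1, 2, 3, or 6 to 17 comorbidity categories using specific ICD-9-CM code ranges), and the helper names are invented for this example.

```python
# Minimal sketch of Deyo-style CCI scoring from an active problem list.
# NOTE: both tables below are abbreviated, illustrative stand-ins for the
# full Deyo mapping of ICD-9-CM codes to 17 weighted comorbidity categories.

# Hypothetical, abbreviated weight table (category -> Charlson weight).
CCI_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "copd": 1,
    "diabetes_with_complications": 2,
    "moderate_severe_renal_disease": 2,
    "metastatic_solid_tumor": 6,
}

# Hypothetical, abbreviated mapping from ICD-9-CM code prefixes to categories.
ICD9_PREFIX_TO_CATEGORY = {
    "410": "myocardial_infarction",
    "428": "congestive_heart_failure",
    "496": "copd",
    "250.4": "diabetes_with_complications",
    "585": "moderate_severe_renal_disease",
    "196": "metastatic_solid_tumor",
}

def deyo_cci(active_problem_icd_codes):
    """Return a simplified Deyo CCI score for one admission.

    Each comorbidity category is counted once, regardless of how many
    codes on the problem list map to it.
    """
    categories = set()
    for code in active_problem_icd_codes:
        for prefix, category in ICD9_PREFIX_TO_CATEGORY.items():
            if code.startswith(prefix):
                categories.add(category)
    return sum(CCI_WEIGHTS[c] for c in categories)

# Example: an admission with an old MI, CHF, and stage 4 CKD on the problem list.
print(deyo_cci(["410.9", "428.0", "585.4"]))  # -> 1 + 1 + 2 = 4
```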

RESULTS

A total of 67,346 admissions were included in the analysis. All parameters increased from 2000 to 2015. Mean CCI increased from 1.60 in 2003 (95% CI, 1.54–1.65) to 3.05 in 2011 (95% CI, 2.97–3.13) and to 3.77 in 2015 (95% CI, 3.67–3.87). Mean number of comorbidities increased from 6.21 in 2003 (95% CI, 6.05–6.36) to 16.09 in 2011 (95% CI, 15.84–16.34) and to 19.89 in 2015 (95% CI, 19.57–20.21). Mean number of notes increased from 193 in 2003 (95% CI, 186–199) to 841 in 2011 (95% CI, 815–868) and to 1289 in 2015 (95% CI, 1243–1335). Mean number of medications increased from 8.37 in 2003 (95% CI, 8.15–8.59) to 16.89 in 2011 (95% CI, 16.60–17.20) and decreased to 16.49 in 2015 (95% CI, 16.18–16.80). Mean number of discharge summaries available at admission increased from 2.29 in 2003 (95% CI, 2.19–2.38) to 4.42 in 2011 (95% CI, 4.27–4.58) and to 5.48 in 2015 (95% CI, 5.27–5.69).
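The yearly summaries above are means with 95% confidence intervals. The article does not state how the intervals were computed; the sketch below shows one common approach, a normal-approximation interval (mean ± 1.96 × standard error) applied to admissions grouped by calendar year. The record structure and field names (such as "year" and "cci") are hypothetical and used only for illustration.

```python
import math
from collections import defaultdict

def mean_with_ci(values, z=1.96):
    """Return (mean, lower, upper) using a normal-approximation 95% CI."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    half_width = z * math.sqrt(variance / n)                    # z * standard error
    return mean, mean - half_width, mean + half_width

def yearly_summary(admissions, field):
    """Group admission records by calendar year and summarize one numeric field."""
    by_year = defaultdict(list)
    for adm in admissions:
        by_year[adm["year"]].append(adm[field])
    return {year: mean_with_ci(vals) for year, vals in sorted(by_year.items())}

# Example with hypothetical admission records.
admissions = [
    {"year": 2003, "cci": 1}, {"year": 2003, "cci": 2}, {"year": 2003, "cci": 2},
    {"year": 2011, "cci": 3}, {"year": 2011, "cci": 4}, {"year": 2011, "cci": 2},
]
for year, (mean, lo, hi) in yearly_summary(admissions, "cci").items():
    print(f"{year}: mean CCI {mean:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```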


DISCUSSION

This retrospective, observational study shows that patient comorbidity and EHR data burden have increased over time, both of which impact resident workload at the time of admission. These findings, combined with the duty hour regulations, suggest that resident workload intensity at the time of admission may be increasing over time.

Patient comorbidity has likely increased due to a combination of factors. Elective admissions have decreased, and demographics have changed consistent with an aging population. Trainee admission patterns have also changed over time, with less-acute patients often admitted to nonacademic providers. Additionally, there are more stringent requirements for inpatient admissions, resulting in higher acuity and comorbidity.

As EHRs have matured and documentation requirements have expanded, the amount of electronic data has grown per patient, substantially increasing the time required to review a patient’s medical record.5,10 In our evaluation, all EHR metrics increased between 2003 and 2011. The only metric that did not increase between 2011 and 2015 was the mean number of medications. The number of notes per patient has shown a dramatic increase. Even in an EHR that has reached maturity (in use more than 10 years), the number of notes per patient still increased by greater than 50% between 2011 and 2015. The VA EHR has been in use for more than 15 years, making it an ideal resource to study data trends. As many EHRs are in their infancy in comparison, these data may serve as a predictor of how other EHRs will mature. While not all notes are reviewed at every admission, this illustrates how increasing data burden combined with poor usability can be time consuming and promote inefficient patient care.11 Moreover, many argue that poor EHR usability also affects cognitive workflow and clinical decision making, tasks that are of utmost value to patient quality and safety as well as resident education.12

Common program requirements for internal medicine as set forth by the ACGME state that residency programs should give adequate attention to scheduling, work intensity, and work compression to optimize resident well-being and prevent burnout.13 Resident workload intensity is multifaceted and encompasses many elements, including patient census and acuity, EHR data assessment, components of patient complexity such as comorbidity and psychosocial situation, and time.13 Work intensity increases as the overall patient census, complexity, acuity, or data burden increases. Similarly, work intensity increases when the time available for patient care is restricted (in the form of duty hours). In addition, work intensity is affected by the time allotted for nonclinical responsibilities, such as morning reports and conferences, because these decrease the amount of time a resident can spend providing patient care.

Many programs have responded to the duty hour restrictions by decreasing patient caps.14 Our data suggest that decreasing patient census alone may not adequately mitigate the workload intensity of residents. There are other alternatives to prevent increasing workload intensity, some of which may already have been employed by some institutions. One such method is for programs to take into account patient complexity or acuity when allocating patients to teaching teams.14 Another is to reduce the time spent on ancillary tasks such as obtaining outside hospital records, transporting patients, and scheduling follow-up appointments. Forgoing routine conferences such as morning reports or noon conferences would decrease work intensity, although obviously at the expense of resident education. Geographic rounding can encourage more efficient use of clinical time. One of the most difficult, but potentially most impactful, strategies would be to streamline EHRs to simplify and speed documentation, refocus regulations, and support and build systems based on the views of clinicians.15

The main limitations of this study include its retrospective design, single-center site, and focus on internal medicine admissions to a VA hospital. Therefore, these findings may not be generalizable to other patient populations and training programs. Another potential limitation is that changes in documentation practices may have led to “upcoding” of patient comorbidity within the EHR. In addition, in this study, we looked only at the data available at the time of admission. To get a more complete picture of true workload intensity, understanding the day-to-day metrics of inpatient care would be crucial.

CONCLUSION

Our study demonstrates that components of resident workload (patient comorbidity and EHR data burden), specifically at the time of admission, have increased over time. These findings, combined with the duty hour regulations, suggest that resident workload intensity at the time of admission has increased over time. This can have significant implications for graduate medical education, patient safety, and burnout. To optimize resident workload, innovation will be required in the areas of workflow, informatics, and curriculum. Future studies assessing workload and intensity over the course of the entire patient hospitalization are needed.


Acknowledgments

The authors thank Paul E. Drawz, MD, MHS, MS (University of Minnesota) for contributions in designing and reviewing the study.

Ethical approval: The study was approved by the Institutional Review Board at the LSCVAMC. The contents do not represent the views of the U.S. Department of Veterans Affairs or the U.S. government. This material is the result of work supported with resources and the use of facilities of the LSCVAMC.

Disclosures

The authors declare that they have no conflicts of interest to disclose.

References

1. Bolster L, Rourke L. The Effect of Restricting Residents’ Duty Hours on Patient Safety, Resident Well-Being, and Resident Education: An Updated Systematic Review. J Grad Med Educ. 2015;7(3):349-363.
2. Fletcher KE, Underwood W, Davis SQ, Mangrulkar RS, McMahon LF, Saint S. Effects of work hour reduction on residents’ lives: a systematic review. JAMA. 2005;294(9):1088-1100.
3. Amin A, Choe J, Collichio F, et al. Resident Duty Hours: An Alliance for Academic Internal Medicine Position Paper. http://www.im.org/d/do/6967. Published February 2016. Accessed November 30, 2017.
4. Goitein L, Ludmerer KM. Resident workload-let’s treat the disease, not just the symptom. JAMA Intern Med. 2013;173(8):655-656.
5. Oxentenko AS, West CP, Popkave C, Weinberger SE, Kolars JC. Time spent on clinical documentation: a survey of internal medicine residents and program directors. Arch Intern Med. 2010;170(4):377-380.
6. Fletcher KE, Reed DA, Arora VM. Doing the dirty work: measuring and optimizing resident workload. J Gen Intern Med. 2011;26(1):8-9.
7. Linzer M, Levine R, Meltzer D, Poplau S, Warde C, West CP. 10 bold steps to prevent burnout in general internal medicine. J Gen Intern Med. 2014;29(1):18-20.
8. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373-383.
9. Deyo RA, Cherkin DC, Ciol MA. Adapting a clinical comorbidity index for use with ICD-9-CM administrative databases. J Clin Epidemiol. 1992;45(6):613-619.
10. Kuhn T, Basch P, Barr M, Yackel T; Medical Informatics Committee of the American College of Physicians. Clinical documentation in the 21st century: executive summary of a policy position paper from the American College of Physicians. Ann Intern Med. 2015;162(4):301-303.
11. Friedberg MW, Chen PG, Van Busum KR, et al. Factors Affecting Physician Professional Satisfaction and Their Implications for Patient Care, Health Systems, and Health Policy. Rand Health Q. 2014;3(4):1.
12. Smith SW, Koppel R. Healthcare information technology’s relativity problems: a typology of how patients’ physical reality, clinicians’ mental models, and healthcare information technology differ. J Am Med Inform Assoc. 2014;21(1):117-131.
13. ACGME Program Requirements for Graduate Medical Education in Internal Medicine. http://www.acgme.org/Portals/0/PFAssets/ProgramRequirements/140_internal_medicine_2017-07-01.pdf. Revised July 1, 2017. Accessed July 22, 2017.
14. Thanarajasingam U, McDonald FS, Halvorsen AJ, et al. Service census caps and unit-based admissions: resident workload, conference attendance, duty hour compliance, and patient safety. Mayo Clin Proc. 2012;87(4):320-327.
15. Payne TH, Corley S, Cullen TA, et al. Report of the AMIA EHR-2020 Task Force on the status and future direction of EHRs. J Am Med Inform Assoc. 2015;22(5):1102-1110.

Journal of Hospital Medicine. 2018;13(8):570-572. Published online first March 26, 2018.

Since the Accreditation Council for Graduate Medical Education (ACGME) posed new duty hour regulations in 2003 and again in 2011, there have been concerns that the substantial compression of resident workload may have resulted in a negative learning environment.1-3 Residents are now expected to complete more work in a reduced amount of time and with less flexibility.4 In addition to time constraints, the actual work of a resident today may differ from that of a resident in the past, especially in the area of clinical documentation.5 Restricting resident work hours without examining the workload may result in increased work intensity and counter the potential benefits of working fewer hours.6 Measuring workload, as well as electronic health record (EHR)–related stress, may also help combat burnout in internal medicine.7 There are many components that influence resident workload, including patient census, patient comorbidities and acuity,EHR data and other available documentation, and ancillary tasks and procedures.7 We define resident workload intensity as the responsibilities required to provide patient care within a specified time. There is a paucity of objective data regarding the workload intensity of residents, which are essential to graduate medical education reform and optimization. Patient census, ancillary responsibilities, number of procedures, and conference length and frequency are some of the variables that can be adjusted by each residency program. As a first step to objective measurement of resident workload intensity, we endeavored to evaluate the less easily residency program–controlled workload components of patient comorbidity and EHR data the time of patient admission.

METHODS

We conducted an observational, retrospective assessment of all admissions to the Louis Stokes Cleveland VA Medical Center (LSCVAMC) internal medicine service from January 1, 2000 to December 31, 2015. The inclusion criteria were admission to non-ICU internal medicine services and an admission note written by a resident physician. Otherwise, there were no exclusions. Data were accessed using VA Informatics and Computing Infrastructure. This study was approved by the LSCVAMC institutional review board.

We evaluated multiple patient characteristics for each admission that were accessible in the EHR at the time of hospital admission including patient comorbidities, medication count, and number of notes and discharge summaries. The Charlson Comorbidity Index (CCI) Deyo version was used to score all patients based on the EHR’s active problem list at the time of admission.8,9 The CCI is a validated score created by categorizing comorbidities using International Classification of Diseases, Ninth and Tenth Revisions.8 Higher CCI scores predict increased mortality and resource usage. For each admission, we also counted the number of active medications, the number of prior discharge summaries, and the total number of notes available in the EHR at the time of patient admission. Patient admissions were grouped by calendar year, the mean numbers of active medications, prior discharge summaries, and total available notes per patient during each year were calculated (Table). Data comparisons were completed between 2003 and 2011 as well as between 2011 and 2015; median data are also provided for these years (Table). These years were chosen based on the years of the duty hour changes as well as comparing a not brand new, but still immature EHR (2003), a mature EHR (2011), and the most recent available data (2015).

RESULTS


Since the Accreditation Council for Graduate Medical Education (ACGME) imposed new duty hour regulations in 2003 and again in 2011, there have been concerns that the substantial compression of resident workload may have resulted in a negative learning environment.1-3 Residents are now expected to complete more work in a reduced amount of time and with less flexibility.4 In addition to time constraints, the actual work of a resident today may differ from that of a resident in the past, especially in the area of clinical documentation.5 Restricting resident work hours without examining the workload may result in increased work intensity and counter the potential benefits of working fewer hours.6 Measuring workload, as well as electronic health record (EHR)-related stress, may also help combat burnout in internal medicine.7 Many components influence resident workload, including patient census, patient comorbidities and acuity, EHR data and other available documentation, and ancillary tasks and procedures.7 We define resident workload intensity as the responsibilities required to provide patient care within a specified time. There is a paucity of objective data regarding resident workload intensity, data that are essential to graduate medical education reform and optimization. Patient census, ancillary responsibilities, number of procedures, and conference length and frequency are some of the variables that can be adjusted by each residency program. As a first step toward objective measurement of resident workload intensity, we endeavored to evaluate the workload components less easily controlled by residency programs: patient comorbidity and EHR data burden at the time of patient admission.

METHODS

We conducted an observational, retrospective assessment of all admissions to the Louis Stokes Cleveland VA Medical Center (LSCVAMC) internal medicine service from January 1, 2000 to December 31, 2015. The inclusion criteria were admission to non-ICU internal medicine services and an admission note written by a resident physician. Otherwise, there were no exclusions. Data were accessed using VA Informatics and Computing Infrastructure. This study was approved by the LSCVAMC institutional review board.

We evaluated multiple patient characteristics for each admission that were accessible in the EHR at the time of hospital admission, including patient comorbidities, medication count, and number of notes and discharge summaries. The Deyo version of the Charlson Comorbidity Index (CCI) was used to score all patients based on the EHR's active problem list at the time of admission.8,9 The CCI is a validated score created by categorizing comorbidities using the International Classification of Diseases, Ninth and Tenth Revisions.8 Higher CCI scores predict increased mortality and resource usage. For each admission, we also counted the number of active medications, the number of prior discharge summaries, and the total number of notes available in the EHR at the time of patient admission. Patient admissions were grouped by calendar year, and the mean numbers of active medications, prior discharge summaries, and total available notes per patient during each year were calculated (Table). Data comparisons were completed between 2003 and 2011 as well as between 2011 and 2015; median data are also provided for these years (Table). These years were chosen based on the timing of the duty hour changes and to compare an established but still immature EHR (2003), a mature EHR (2011), and the most recent available data (2015).
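
To make the scoring and per-year summary above concrete, the following is a minimal sketch. It assumes hypothetical field names (year, problem_codes, note_count, medication_count, discharge_summary_count) and uses an abbreviated, illustrative subset of the Deyo ICD-9 mapping; it is not the authors' actual VA Informatics and Computing Infrastructure code.

```python
# Minimal sketch (not the authors' VINCI code): score each admission with a
# Deyo-style Charlson Comorbidity Index from the active problem list and
# summarize admission-level metrics by calendar year with 95% CIs.
# The field names and the abbreviated ICD-9 prefix map below are assumptions.
import math
from collections import defaultdict

# Abbreviated, illustrative subset of the Deyo ICD-9 mapping: prefix -> (category, weight)
DEYO_SUBSET = {
    "410": ("myocardial_infarction", 1),
    "428": ("congestive_heart_failure", 1),
    "490": ("chronic_pulmonary_disease", 1),
    "250.0": ("diabetes_uncomplicated", 1),
    "585": ("renal_disease", 2),
    "197": ("metastatic_solid_tumor", 6),
}

def charlson_score(problem_icd9_codes):
    """Sum Deyo weights, counting each Charlson category at most once per patient."""
    categories = {}
    for code in problem_icd9_codes:
        for prefix, (category, weight) in DEYO_SUBSET.items():
            if code.startswith(prefix):
                categories[category] = weight
    return sum(categories.values())

def mean_with_ci(values, z=1.96):
    """Mean and normal-approximation 95% CI for a nonempty list of values."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1)) if n > 1 else 0.0
    half_width = z * sd / math.sqrt(n)
    return mean, mean - half_width, mean + half_width

def yearly_summary(admissions):
    """admissions: iterable of dicts with hypothetical keys 'year', 'problem_codes',
    'note_count', 'medication_count', and 'discharge_summary_count'."""
    by_year = defaultdict(lambda: defaultdict(list))
    for adm in admissions:
        metrics = by_year[adm["year"]]
        metrics["cci"].append(charlson_score(adm["problem_codes"]))
        metrics["notes"].append(adm["note_count"])
        metrics["medications"].append(adm["medication_count"])
        metrics["discharge_summaries"].append(adm["discharge_summary_count"])
    return {year: {name: mean_with_ci(vals) for name, vals in m.items()}
            for year, m in by_year.items()}
```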

RESULTS

A total of 67,346 admissions were included in the analysis. All parameters increased from 2000 to 2015. Mean CCI increased from 1.60 in 2003 (95% CI, 1.54–1.65) to 3.05 in 2011 (95% CI, 2.97–3.13) and to 3.77 in 2015 (95% CI, 3.67–3.87). Mean number of comorbidities increased from 6.21 in 2003 (95% CI, 6.05–6.36) to 16.09 in 2011 (95% CI, 15.84–16.34) and to 19.89 in 2015 (95% CI, 19.57–20.21). Mean number of notes increased from 193 in 2003 (95% CI, 186–199) to 841 in 2011 (95% CI, 815–868) and to 1289 in 2015 (95% CI, 1243–1335). Mean number of medications increased from 8.37 in 2003 (95% CI, 8.15–8.59) to 16.89 in 2011 (95% CI, 16.60–17.20) and decreased to 16.49 in 2015 (95% CI, 16.18–16.80). Mean number of discharge summaries available at admission increased from 2.29 in 2003 (95% CI, 2.19–2.38) to 4.42 in 2011 (95% CI, 4.27–4.58) and to 5.48 in 2015 (95% CI, 5.27–5.69).

 

 

DISCUSSION

This retrospective, observational study shows that patient comorbidity and EHR data burden have increased over time, both of which impact resident workload at the time of admission. These findings, combined with the duty hour regulations, suggest that resident workload intensity at the time of admission may be increasing over time.

Patient comorbidity has likely increased due to a combination of factors. Elective admissions have decreased, and demographics have changed consistent with an aging population. Trainee admission patterns have also changed over time, with less acute patients often admitted to nonacademic providers. Additionally, there are more stringent requirements for inpatient admissions, resulting in higher acuity and comorbidity.

As EHRs have matured and documentation requirements have expanded, the amount of electronic data per patient has grown, substantially increasing the time required to review a patient's medical record.5,10 In our evaluation, all EHR metrics increased between 2003 and 2011. The only metric that did not increase between 2011 and 2015 was the mean number of medications. The number of notes per patient has shown a dramatic increase. Even in an EHR that has reached maturity (in use more than 10 years), the number of notes per patient still increased by greater than 50% between 2011 and 2015. The VA EHR has been in use for more than 15 years, making it an ideal resource to study data trends. As many EHRs are in their infancy in comparison, these data may serve as a predictor of how other EHRs will mature. While not all notes are reviewed at every admission, this illustrates how increasing data burden combined with poor usability can be time consuming and can promote inefficient patient care.11 Moreover, many argue that poor EHR usability also affects cognitive workflow and clinical decision making, processes that are of utmost value to patient quality and safety as well as resident education.12

Common program requirements for internal medicine, as set forth by the ACGME, state that residency programs should give adequate attention to scheduling, work intensity, and work compression to optimize resident well-being and prevent burnout.13 Resident workload intensity is multifaceted and encompasses many elements, including patient census and acuity, EHR data assessment, components of patient complexity such as comorbidity and psychosocial situation, and time.13 Work intensity increases with increases in overall patient census, complexity, acuity, or data burden. Similarly, work intensity increases when the time available for patient care is restricted (in the form of duty hours). In addition, work intensity is affected by the time allotted to nonclinical responsibilities, such as morning reports and conferences, as these decrease the amount of time a resident can spend providing patient care.

Many programs have responded to the duty hour restrictions by decreasing patient caps.14 Our data suggest that decreasing patient census alone may not adequately mitigate resident workload intensity. Other approaches to curbing rising workload intensity may already have been employed by some institutions. One is to take patient complexity or acuity into account when allocating patients to teaching teams.14 Another is to reduce the time residents spend on ancillary tasks such as obtaining outside hospital records, transporting patients, and scheduling follow-up appointments. Forgoing routine conferences such as morning report or noon conference would decrease work intensity, although obviously at the expense of resident education. Geographic rounding can encourage more efficient use of clinical time. One of the most difficult, but potentially most impactful, strategies would be to streamline EHRs to simplify and speed documentation, refocus regulations, and support and build systems based on clinicians' input.15

The main limitations of this study include its retrospective design, single-center setting, and focus on internal medicine admissions to a VA hospital. Therefore, these findings may not be generalizable to other patient populations and training programs. Another potential limitation is that changes in documentation practices may have led to “upcoding” of patient comorbidity within the EHR. In addition, in this study, we looked only at the data available at the time of admission. To get a more complete picture of true workload intensity, understanding the day-to-day metrics of inpatient care would be crucial.

CONCLUSION

Our study demonstrates that components of resident workload (patient comorbidity and EHR data burden), specifically at the time of admission, have increased over time. These findings, combined with the duty hour regulations, suggest resident workload intensity at the time of admission has increased over time. This can have significant implications for graduate medical education, patient safety, and burnout. To optimize resident workload, innovation will be required in the areas of workflow, informatics, and curriculum. Future studies assessing workload and intensity over the course of the entire patient hospitalization are needed.

 

 

Acknowledgments

The authors thank Paul E. Drawz, MD, MHS, MS (University of Minnesota) for contributions in designing and reviewing the study.

Ethical approval: The study was approved by the Institutional Review Board at the LSCVAMC. The contents do not represent the views of the U.S. Department of Veterans Affairs or the U.S. government. This material is the result of work supported with resources and the use of facilities of the LSCVAMC.

Disclosures

The authors declare that they have no conflicts of interest to disclose.

References

1. Bolster L, Rourke L. The Effect of Restricting Residents’ Duty Hours on Patient Safety, Resident Well-Being, and Resident Education: An Updated Systematic Review. J Grad Med Educ. 2015;7(3):349-363. PubMed
2. Fletcher KE, Underwood W, Davis SQ, Mangrulkar RS, McMahon LF, Saint S. Effects of work hour reduction on residents’ lives: a systematic review. JAMA. 2005; 294(9):1088-1100. PubMed
3. Amin A, Choe J, Collichio F, et al. Resident Duty Hours: An Alliance for Academic Internal Medicine Position Paper. http://www.im.org/d/do/6967. Published February 2016. Accessed November 30, 2017.
4. Goitein L, Ludmerer KM. Resident workload-let’s treat the disease, not just the symptom. JAMA Intern Med. 2013;173(8):655-656. PubMed
5. Oxentenko AS, West CP, Popkave C, Weinberger SE, Kolars JC. Time spent on clinical documentation: a survey of internal medicine residents and program directors. Arch Intern Med. 2010;170(4):377-380. PubMed
6. Fletcher KE, Reed DA, Arora VM. Doing the dirty work: measuring and optimizing resident workload. J Gen Intern Med. 2011;26(1):8-9. PubMed
7. Linzer M, Levine R, Meltzer D, Poplau S, Warde C, West CP. 10 bold steps to prevent burnout in general internal medicine. J Gen Intern Med. 2014;29(1):18-20. PubMed
8. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373-383. PubMed
9. Deyo RA, Cherkin DC, Ciol MA. Adapting a clinical comorbidity index for use with ICD-9-CM administrative databases. J Clin Epidemiol. 1992;45(6):613-619. PubMed
10. Kuhn T, Basch P, Barr M, Yackel T; Medical Informatics Committee of the American College of Physicians. Clinical documentation in the 21st century: executive summary of a policy position paper from the American College of Physicians. Ann Intern Med. 2015;162(4):301-303. PubMed
11. Friedberg MW, Chen PG, Van Busum KR, et al. Factors Affecting Physician Professional Satisfaction and Their Implications for Patient Care, Health Systems, and Health Policy. Rand Health Q. 2014;3(4):1. PubMed
12. Smith SW, Koppel R. Healthcare information technology’s relativity problems: a typology of how patients’ physical reality, clinicians’ mental models, and healthcare information technology differ. J Am Med Inform Assoc. 2014; 21(1):117-131. PubMed
13. ACGME Program Requirements for Graduate Medical Education in Internal Medicine. http://www.acgme.org/Portals/0/PFAssets/ProgramRequirements/140_internal_medicine_2017-07-01.pdf. Revised July 1, 2017. Accessed July 22, 2017.
14. Thanarajasingam U, McDonald FS, Halvorsen AJ, et al. Service census caps and unit-based admissions: resident workload, conference attendance, duty hour compliance, and patient safety. Mayo Clin Proc. 2012;87(4):320-327. PubMed
15. Payne TH, Corley S, Cullen TA, et al. Report of the AMIA EHR-2020 Task Force on the status and future direction of EHRs. J Am Med Inform Assoc. 2015;22(5):1102-1110. PubMed

Issue
Journal of Hospital Medicine 13(8)
Page Number
570-572. Published online first March 26, 2018
Article Source
© 2018 Society of Hospital Medicine
Correspondence Location
Todd I. Smith, MD, FHM, Louis Stokes Cleveland Department of Veterans Affairs Medical Center, 10701 East Blvd 111(W), Cleveland, OH 44106; E-mail: Todd.Smith@va.gov

Inpatient Trainee Clinical Exposures

Article Type
Changed
Sun, 05/21/2017 - 14:08
Display Headline
Clinical exposures during internal medicine acting internship: Profiling student and team experiences

The clinical learning model in medical education, specifically in the third and fourth years of medical school and in residency and fellowship training, is driven by direct patient‐care experiences and complemented by mentorship and supervision provided by experienced physicians.[1] Despite the emphasis on experiential learning in medical school and graduate training, the ability of educators to quantify the clinical experiences of learners has been limited. Case logs, often self‐reported, are frequently required during educational rotations to attempt to measure clinical experience.[2] Logs have been utilized to document diagnoses, demographics, disease severity, procedures, and chief complaints.[3, 4, 5, 6] Unfortunately, self‐reported logs are vulnerable to delayed updates, misreported data, and unreliable data validation.[7, 8] Automated data collection has been shown to be more reliable than self‐reported logs.[8, 9]

The enhanced data mining methods now available allow educators to appraise learners' exposures during patient-care interactions beyond just the diagnosis or chief complaint (eg, how many electrocardiograms do our learners evaluate during a cardiology rotation, how often do our learners gain experience prescribing a specific class of antibiotics, how many of the patients seen by our learners are diabetic). For example, a learner's interaction with a patient during an inpatient admission for community-acquired pneumonia, at minimum, would include assessing the past medical history, reviewing outpatient medications and allergies, evaluating tests completed (chest x-ray, complete blood count, blood cultures), prescribing antibiotics, and monitoring comorbidities. The lack of knowledge regarding the frequency and context of these exposures is a key gap in our understanding of the clinical experience of inpatient trainees. Additionally, there are no data on clinical exposures specific to team-based inpatient learning. When a rotation is team-based, the educational experience is not limited to the learner's assigned patients, and this arrangement allows for educational exposures from patients who are not the learner's primary assignments through experiences gained during team rounds, cross-coverage assessments, and informal discussions of patient care.

In this study, we quantify the clinical exposures of learners on an acting internship (AI) rotation in internal medicine by utilizing the Veterans Affairs (VA) electronic medical records (EMR) as collected through the VA Veterans Integrated Service Network 10 Clinical Data Warehouse (CDW). The AI or subinternship is a medical school clinical rotation typically completed in the fourth year, where the learning experience is expected to mirror a 1‐month rotation of a first‐year resident.[10] The AI has historically been defined as an experiential curriculum, during which students assume many of the responsibilities and activities that they will manage as graduate medical trainees.[10, 11] The exposures of AI learners include primary diagnoses encountered, problem lists evaluated at the time of admission, medications prescribed, laboratory tests ordered, and radiologic imaging evaluated. We additionally explored the exposures of the AI learner's team to assess the experiences available through team‐based care.

METHODS

This study was completed at the Louis Stokes Veterans Affairs Medical Center (LSVAMC) in Cleveland, Ohio, which is an academic affiliate of the Case Western Reserve University School of Medicine. The study was approved by the LSVAMC institutional review board.

At the LSVAMC, the AI rotation in internal medicine is a 4-week inpatient rotation for fourth-year medical students, in which the student is assigned to an inpatient medical team consisting of an attending physician, a senior resident, and a combination of first-year residents and acting interns. Compared to a first-year resident, the acting intern is assigned approximately half the number of admissions. The team rounds as a group at least once per day. Acting interns are permitted to place orders and write notes in the EMR; all orders require a cosignature by a resident or attending physician to be released.

We identified students who rotated through the LSVAMC for an AI in internal medicine rotation from July 2008 to November 2011 from rotation records. Using the CDW, we queried student names and their rotation dates and analyzed the results using a Structured Query Language Query Analyzer. Each student's patient encounters during the rotation were identified. A patient encounter was defined as a patient for whom the student wrote at least 1 note titled either Medicine Admission Note or Medicine Inpatient Progress Note, on any of the dates during their AI rotation. We then counted the total number of notes written by each student during their rotation. A patient identifier is associated with each note. The number of distinct patient identifiers was also tallied to establish the total number of patients seen during the rotation by the individual student as the primary caregiver.
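
The encounter definition above (at least one note titled Medicine Admission Note or Medicine Inpatient Progress Note written during the rotation window) can be expressed as a short query. The sketch below is an illustrative pandas version, not the authors' CDW SQL, and the column names (author_id, note_title, note_date, patient_id, rotation_start, rotation_end) are assumptions.

```python
# Minimal sketch of the encounter definition; column names are assumptions,
# and this is not the authors' actual CDW query.
import pandas as pd

ENCOUNTER_TITLES = {"Medicine Admission Note", "Medicine Inpatient Progress Note"}

def rotation_encounters(notes: pd.DataFrame, rotations: pd.DataFrame) -> pd.DataFrame:
    """One row per (student, rotation) with total note count and distinct patient count."""
    merged = notes.merge(rotations, on="author_id")  # attach each note to its author's rotation window
    in_window = merged[
        merged["note_title"].isin(ENCOUNTER_TITLES)
        & (merged["note_date"] >= merged["rotation_start"])
        & (merged["note_date"] <= merged["rotation_end"])
    ]
    return (
        in_window.groupby(["author_id", "rotation_start"])
        .agg(total_notes=("note_title", "size"),
             distinct_patients=("patient_id", "nunique"))
        .reset_index()
    )
```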

We associated each patient encounter with an inpatient admission profile that included patient admission and discharge dates, International Classification of Diseases, 9th Revision (ICD‐9) diagnosis codes, and admitting specialty. Primary diagnosis codes were queried for each admission and were counted for individual students and in aggregate. We tallied both the individual student and aggregate patient medications prescribed during the dates of admission and ordered to a patient location consistent with an acute medical ward (therefore excluding orders placed if a patient was transferred to an intensive care unit). Similar queries were completed for laboratory and radiological testing.
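
As one example of the ward-restricted tallies described above, the following sketch counts medication orders placed during the admission dates and directed to an acute medical ward location; it is not the authors' query, and the table, column, and location names are assumptions.

```python
# Minimal sketch of tallying medications ordered during the admission to an acute
# medical ward location (excluding, e.g., ICU locations); not the authors' query,
# and the DataFrame/column names and location values are assumptions.
import pandas as pd

ACUTE_WARD_LOCATIONS = {"MEDICINE WARD A", "MEDICINE WARD B"}  # hypothetical ward names

def ward_medication_counts(orders: pd.DataFrame, admissions: pd.DataFrame) -> pd.Series:
    """Count medication orders per drug, restricted to the admission dates and ward locations."""
    joined = orders.merge(admissions, on="admission_id")
    during_stay = joined[
        (joined["order_date"] >= joined["admit_date"])
        & (joined["order_date"] <= joined["discharge_date"])
        & joined["order_location"].isin(ACUTE_WARD_LOCATIONS)
    ]
    return during_stay.groupby("medication_name").size().sort_values(ascending=False)
```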

The VA EMR keeps an active problem list on each patient, and items are associated with an ICD‐9 code. To assemble the active problems available for evaluation by the student on the day of a patient's admission, we queried all problem list items added prior to, but not discontinued before, the day of admission. We then tallied the results for every patient seen by each individual student and in aggregate.
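
The "active on the day of admission" filter described above can be sketched as follows; the column names (entered_date, discontinue_date) are assumptions, and this is not the authors' query.

```python
# Minimal sketch of the "active problem list at admission" filter: items added
# before admission and not discontinued before it. Column names are assumptions.
import pandas as pd

def active_problems_at_admission(problems: pd.DataFrame, admit_date) -> pd.DataFrame:
    """Problem-list rows entered before admit_date and still active on that day.
    A null discontinue_date is treated as still active."""
    entered_before = problems["entered_date"] < admit_date
    not_discontinued = problems["discontinue_date"].isna() | (
        problems["discontinue_date"] >= admit_date
    )
    return problems[entered_before & not_discontinued]
```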

To assess the team exposures for each AI student, we queried all discharge summaries cosigned by the student's attending during the dates of the student's rotation. We assumed the student's team members wrote these discharge summaries. After excluding the student's patients, the resultant list represented the team patient exposures for each student. This list was also queried for the number of patients seen, primary diagnoses, medications, problems, labs, and radiology. The number of team admissions counted included all patients who spent at least 1 day on the team while the student was rotating. All other team exposure counts included only patients who were both admitted and discharged within the dates of the student's rotation.
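
A minimal sketch of this team-exposure logic, assuming hypothetical field names (cosigner_id, note_date, patient_id, and a per-student record with attending_id, rotation dates, and own_patient_ids), is shown below; it is not the authors' query.

```python
# Minimal sketch of deriving a student's team-patient list from discharge summaries
# cosigned by the student's attending during the rotation, minus the student's own
# patients. Field names are assumptions; this is not the authors' query.
import pandas as pd

def team_patients(discharge_summaries: pd.DataFrame, student: dict) -> set:
    """student: dict with hypothetical keys attending_id, rotation_start,
    rotation_end, and own_patient_ids."""
    cosigned = discharge_summaries[
        (discharge_summaries["cosigner_id"] == student["attending_id"])
        & (discharge_summaries["note_date"] >= student["rotation_start"])
        & (discharge_summaries["note_date"] <= student["rotation_end"])
    ]
    return set(cosigned["patient_id"]) - set(student["own_patient_ids"])
```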

RESULTS

An AI rotation is 4 weeks in duration. Students completed a total of 128 rotations from July 30, 2008 through November 21, 2011. We included all rotations during this time period in the analysis. Tables 1 through 5 report results in 4 categories. The Student category tallies the total number of specific exposures (diagnoses, problems, medications, lab values, or radiology tests) for all patients primarily assigned to a student. The Team category tallies the total number of exposures for all patients assigned to other members of the student's inpatient team. The Primary % category identifies the percentage of students who had at least 1 assigned patient with the evaluated clinical exposure. The All Patients % category identifies the percentage of students who had at least 1 student-assigned patient or at least 1 team-assigned patient with the evaluated clinical exposure.
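
The four reporting categories defined above can be illustrated with a brief sketch; the input shapes and names are assumptions rather than the authors' code.

```python
# Minimal sketch of the four reporting categories used in Tables 1-5;
# input shapes and names are assumptions, not the authors' code.

def exposure_summary(student_counts: dict, team_counts: dict) -> dict:
    """For one exposure (e.g., a diagnosis), student_counts and team_counts map
    student_id -> number of patients with that exposure among the student's own
    patients and the team's other patients, respectively (assumes >= 1 student)."""
    students = set(student_counts) | set(team_counts)
    n = len(students)
    primary_pct = 100 * sum(1 for s in students if student_counts.get(s, 0) > 0) / n
    all_pct = 100 * sum(
        1 for s in students
        if student_counts.get(s, 0) > 0 or team_counts.get(s, 0) > 0
    ) / n
    return {
        "Student": sum(student_counts.values()),   # total exposures on student-assigned patients
        "Team": sum(team_counts.values()),         # total exposures on team-assigned patients
        "Primary %": round(primary_pct),
        "All Patients %": round(all_pct),
    }
```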

Table 1. Most Common Primary Diagnoses
Diagnosis | Student | Team | Primary % | All Patients %
Obstructive chronic bronchitis, with acute exacerbation | 102 | 241 | 57% | 91%
Pneumonia, organism unspecified | 91 | 228 | 49% | 91%
Acute renal failure, unspecified | 73 | 170 | 46% | 83%
Urinary tract infection, site not specified | 69 | 149 | 43% | 87%
Congestive heart failure, unspecified | 65 | 114 | 41% | 68%
Alcohol withdrawal | 46 | 101 | 26% | 61%
Alcoholic cirrhosis of liver | 28 | 98 | 16% | 57%
Cellulitis and abscess of leg, except foot | 26 | 61 | 18% | 45%
Acute pancreatitis | 23 | 51 | 16% | 43%
Intestinal infection due to Clostridium difficile | 22 | 30 | 17% | 33%
Malignant neoplasm of bronchus and lung, unspecified | 22 | 38 | 16% | 35%
Acute on chronic diastolic heart failure | 22 | 45 | 16% | 39%
Encounter for antineoplastic chemotherapy | 21 | 96 | 15% | 48%
Dehydration | 19 | 78 | 13% | 46%
Anemia, unspecified | 19 | 36 | 13% | 30%
Pneumonitis due to inhalation of food or vomitus | 19 | 25 | 13% | 24%
Syncope and collapse | 16 | 38 | 13% | 39%
Other pulmonary embolism and infarction | 15 | 41 | 12% | 26%
Unspecified pleural effusion | 15 | 37 | 10% | 34%
Acute respiratory failure | 15 | 42 | 11% | 35%
Table 2. Most Common Problem List Items
Problem | Student | Team | Primary % | All Patients %
Hypertension | 1,665 | 3,280 | 100% | 100%
Tobacco use disorder | 1,350 | 2,759 | 100% | 100%
Unknown cause morbidity/mortality | 1,154 | 2,370 | 100% | 100%
Hyperlipidemia | 1,036 | 2,044 | 99% | 100%
Diabetes mellitus 2 without complication | 865 | 1,709 | 100% | 100%
Chronic airway obstruction | 600 | 1,132 | 100% | 100%
Esophageal reflux | 583 | 1,131 | 99% | 100%
Depressive disorder | 510 | 1,005 | 100% | 100%
Dermatophytosis of nail | 498 | 939 | 98% | 100%
Alcohol dependence | 441 | 966 | 97% | 100%
Chronic ischemic heart disease | 385 | 758 | 95% | 100%
Osteoarthritis | 383 | 791 | 96% | 100%
Lumbago | 357 | 692 | 97% | 100%
Current use: anticoagulation | 342 | 629 | 94% | 100%
Anemia | 337 | 674 | 97% | 100%
Inhibited sex excitement | 317 | 610 | 91% | 100%
Congestive heart failure | 294 | 551 | 91% | 100%
Peripheral vascular disease | 288 | 529 | 88% | 99%
Sensorineural hearing loss | 280 | 535 | 88% | 99%
Post-traumatic stress disorder | 274 | 528 | 91% | 100%
Pure hypercholesterolemia | 262 | 521 | 88% | 100%
Coronary atherosclerosis | 259 | 396 | 87% | 95%
Obesity | 246 | 509 | 89% | 99%
Atrial fibrillation | 236 | 469 | 85% | 100%
Gout | 216 | 389 | 85% | 100%
Table 3. Most Common Medications Prescribed
Medication | Student | Team | Primary % | All Patients %
Omeprazole | 1,372 | 2,981 | 99% | 100%
Heparin | 1,067 | 2,271 | 95% | 96%
Sodium chloride 0.9% | 925 | 2,036 | 99% | 100%
Aspirin | 844 | 1,782 | 98% | 100%
Potassium chloride | 707 | 1,387 | 99% | 100%
Metoprolol tartrate | 693 | 1,318 | 98% | 100%
Insulin regular | 692 | 1,518 | 99% | 100%
Acetaminophen | 669 | 1,351 | 98% | 100%
Simvastatin | 648 | 1,408 | 99% | 100%
Lisinopril | 582 | 1,309 | 98% | 100%
Furosemide | 577 | 1,186 | 98% | 100%
Docusate sodium | 541 | 1,127 | 98% | 100%
Vancomycin | 531 | 977 | 98% | 100%
Multivitamin | 478 | 1,074 | 96% | 100%
Piperacillin/tazobactam | 470 | 781 | 98% | 100%
Selected examples:
Prednisone | 305 | 613 | 93% | 100%
Insulin glargine | 244 | 492 | 81% | 98%
Spironolactone | 167 | 380 | 73% | 98%
Digoxin | 68 | 125 | 40% | 77%
Meropenem | 16 | 21 | 11% | 24%
Table 4. Common Laboratory Tests (Proxy)
Note: Abbreviations: SGOT, serum glutamic oxaloacetic transaminase; WBC, white blood cell.
Lab Test | Student | Team | Primary % | All Patients %
Fingerstick glucose | 12,869 | 24,946 | 100% | 100%
Renal panel (serum sodium) | 7,728 | 14,504 | 100% | 100%
Complete blood count (blood hematocrit) | 7,372 | 14,188 | 100% | 100%
International normalized ratio | 3,725 | 6,259 | 100% | 100%
Liver function tests (serum SGOT) | 1,570 | 3,180 | 99% | 100%
Urinalysis (urine nitrite) | 789 | 1,537 | 100% | 100%
Arterial blood gas (arterial blood pH) | 767 | 704 | 78% | 99%
Hemoglobin A1C | 485 | 1,177 | 96% | 100%
Fractional excretion of sodium (urine creatinine) | 336 | 677 | 85% | 99%
Lactic acid | 195 | 314 | 65% | 96%
Ferritin | 193 | 413 | 74% | 99%
Thyroid-stimulating hormone | 184 | 391 | 55% | 64%
Lipase | 157 | 317 | 58% | 91%
Hepatitis C antibody | 139 | 327 | 70% | 98%
Haptoglobin | 101 | 208 | 46% | 83%
B-type natriuretic peptide | 98 | 212 | 48% | 87%
Cortisol | 70 | 119 | 34% | 60%
Rapid plasma reagin | 70 | 173 | 44% | 82%
Urine legionella antigen | 70 | 126 | 38% | 64%
D-dimer | 59 | 111 | 34% | 72%
Digoxin | 45 | 69 | 18% | 39%
Paracentesis labs (peritoneal fluid total protein) | 34 | 47 | 16% | 34%
Thoracentesis labs (pleural fluid WBC count) | 33 | 42 | 20% | 38%
C-reactive protein | 30 | 65 | 17% | 34%
Lumbar puncture labs (cerebrospinal fluid WBC count) | 22 | 57 | 11% | 27%
Arthrocentesis (synovial fluid WBC count) | 14 | 23 | 9% | 23%
Table 5. Most Common Radiology Tests
Note: Abbreviations: CT, computed tomography; KUB, kidney, ureter, and bladder; MRI, magnetic resonance imaging; PA, posteroanterior; PE, pulmonary embolism; PET, positron-emission tomography.
Radiology Test | Student | Team | Primary % | All Patients %
Chest, 2 views, PA and lateral | 938 | 1,955 | 100% | 100%
Chest portable | 414 | 751 | 96% | 100%
CT head without contrast | 235 | 499 | 82% | 100%
CT abdomen with contrast | 218 | 365 | 59% | 71%
CT pelvis with contrast | 213 | 364 | 59% | 70%
CT chest with contrast | 163 | 351 | 75% | 99%
Ultrasound kidney, bilateral | 119 | 208 | 61% | 92%
Abdomen 1 view | 107 | 220 | 59% | 93%
Ultrasound liver | 100 | 183 | 48% | 82%
Modified barium swallow | 93 | 130 | 53% | 82%
PET scan | 93 | 181 | 49% | 79%
Selected examples:
Acute abdomen series | 85 | 177 | 48% | 81%
CT chest, PE protocol | 67 | 126 | 37% | 73%
MRI brain with and without contrast | 56 | 109 | 34% | 66%
Chest decubitus | 51 | 76 | 34% | 60%
Portable KUB for Dobhoff placement | 42 | 62 | 30% | 48%
Ventilation/perfusion lung scan | 15 | 25 | 12% | 27%
Ultrasound thyroid | 8 | 16 | 5% | 17%

Distinct Patients and Progress Notes

The mean number of progress notes written by a student was 67.2 (standard deviation [SD] 16.3). The mean number of distinct patients evaluated by a student during a rotation was 18.4 (SD 4.2). The mean number of team admissions per student rotation was 46.7 (SD 9.6) distinct patients.

Primary Diagnoses

A total of 2213 primary diagnoses were documented on patients assigned to students on AI rotations. A total of 5323 primary diagnoses were documented on patients assigned to other members of the team during the students' rotations. Therefore, the mean number of primary diagnoses seen by a student during a rotation was 58.9 (17.3 primary diagnoses for student‐assigned patients and 41.6 primary diagnoses for team patients). The students and teams encountered similar diagnoses (Table 1).

Problem List

Students and teams evaluated a total of 40,015 and 78,643 past medical problems, respectively. The mean number of problems seen by a student during a rotation was 927 (313 student, 614 team). Table 2 reports the most frequent problems assigned to primary student admissions. Students and teams evaluated similar problems. Hepatitis C (196 student, 410 team) was the only team problem that was in the team top 25 but not in the student top 25.

Medications

A total of 38,149 medications were prescribed to the students' primary patients. A total of 77,738 medications were prescribed to patients assigned to the rest of the team. The mean number of medication exposures for a student during a rotation was 905 (298 student, 607 team). The most frequently prescribed medications were similar between student and the team (Table 3). Team medications that were in the top 25 but not in the student top 25 included: hydralazine (300 student, 629 team), prednisone (305 student, 613 team), and oxycodone/acetaminophen (286 student, 608 team).

Labs

All laboratory tests with reported results were tallied. For common laboratory panels, single lab values (eg, serum hematocrit for a complete blood count) were selected as proxies to count the number of studies completed and evaluated. Table 4 shows a cross‐section of laboratory tests evaluated during AI rotations.

Radiology

A total of 6197 radiology tests were completed on patients assigned to students, whereas 11,761 radiology tests were completed on patients assigned to other team members. The mean number of radiology exposures for a student was 140 (48 student, 92 team). The most frequently seen radiology tests were similar between student and the team (Table 5).

DISCUSSION

As medical educators, we assume that the clinical training years allow learners to develop essential skills through their varied clinical experiences. Through exposure to direct patient care, to medical decision‐making scenarios, and to senior physician management practices, trainees build the knowledge base for independent practice. To ensure there is sufficient clinical exposure, data on what trainees are encountering may prove beneficial.

In this novel study, we quantified what learners encounter during a 1-month team-based inpatient rotation at a large teaching hospital. We effectively measured a number of aspects of internal medicine inpatient training that have been difficult to quantify in the past. The ability to extract learner-specific data is becoming increasingly available in academic teaching hospitals. For example, VA medical centers have access to a daily updated national data warehouse. The other steps necessary for using learner-specific data include an understanding of the local inpatient process (how tests are ordered, what note titles are used by trainees) as well as someone able to build the queries necessary for data extraction. Once built, data extraction should be able to continue as an automated process and be used in real time by medical educators.

Our method of data collection has limitations. The orders placed on a learner's primary patients may not have been placed by the learner. For example, orders may have been placed by an overnight resident cross-covering the learner's patients. We assumed that learners evaluated the results of all tests (or medication changes) that occurred at any time during their rotation, including cross-cover periods or days off. In addition, our method for evaluating team exposure underestimates the number of team patients calculated for each learner by limiting the query only to patients whose hospital stay was completed before the student left the inpatient service. It is also difficult to know how many of the exposures are realized by the learner. Differences in learner attention, contrasts in rounding styles, and varying presentation methods will affect the number of exposures truly attained by the learner. Finally, not all clinical exposures can be evaluated through review of an EMR. Clinical experiences, such as care coordination, patient education, and family counseling, cannot be easily extracted.

Data mining EMRs can enhance clinical medical education. Although our data collection was completed retrospectively, we could easily provide learner-specific data in real time to ward attendings, chief residents, and program directors. This information could direct the development of teaching tools and individualization of curricula. Perhaps even more importantly, it would also allow educators to define curricular gaps. Whether these gaps are due to the particular patient demographics of a medical center, the practice patterns and strengths of a particular institution, or the career interests of a trainee, they may skew the patient-care experiences encountered by individual trainees. We can use these data to identify differences in clinical experience and then develop opportunities for learners (clinical, didactic, or simulated) to address deficiencies and provide well-rounded clinical experiences.

Further investigation to better understand the relationship between direct patient‐care experience and clinical skill acquisition is needed. This information could help guide the development of standards on the number of exposures we expect our learners to have with different diagnostic or treatment modalities prior to independent practice. Using learner data to better understand the clinical experiences of our medical trainees, we can hopefully develop more precise and focused curricula to ensure we produce competent graduates.

Acknowledgments

This material is the result of work supported with resources and the use of facilities at the Louis Stokes Cleveland VA Medical Center. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs.

References
1. Accreditation Council for Graduate Medical Education. Program requirements for graduate medical education in internal medicine. Available at: http://www.acgme.org/acgmeweb/Portals/0/PFAssets/2013-PR-FAQ-PIF/140_internal_medicine_07012013.pdf. Originally accessed December 18, 2012.
2. Kasten SJ, Prince ME, Lypson ML. Residents make their lists and program directors check them twice: reviewing case logs. J Grad Med Educ. 2012;34:257-260.
3. Mattana J, Kerpen H, Lee C, et al. Quantifying internal medicine resident clinical experience using resident-selected primary diagnosis codes. J Hosp Med. 2011;6(7):395-400.
4. Rattner SL, Louis DZ, Rabinowitz C, et al. Documenting and comparing medical students' clinical experiences. JAMA. 2001;286:1035-1040.
5. Sequist TD, Singh S, Pereira AG, Rusinak D, Pearson SD. Use of an electronic medical record to profile the continuity clinic experiences of primary care residents. Acad Med. 2005;80:390-394.
6. Iglar K, Polsky J, Glazier R. Using a Web-based system to monitor practice profiles in primary care residency training. Can Fam Physician. 2011;57:1030-1037.
7. Nagler J, Harper MB, Bachur RG. An automated electronic case log: using electronic information systems to assess training in emergency medicine. Acad Emerg Med. 2006;13:733-739.
8. Simpao A, Heitz JW, McNulty SE, Chekemian B, Bren BR, Epstein RH. The design and implementation of an automated system for logging clinical experiences using an anesthesia information management system. Anesth Analg. 2011;112(2):422-429.
9. Nkoy FL, Petersen S, Matheny Antommaria AH, Maloney CG. Validation of an electronic system for recording medical student patient encounters. AMIA Annu Symp Proc. 2008;2008:510-514.
10. Sidlow R. The structure and content of the medical subinternship: a national survey. J Gen Intern Med. 2001;16:550-553.
11. Jolly BC, MacDonald MM. Education for practice: the role of practical experience in undergraduate and general clinical training. Med Educ. 1989;23:189-195.
Issue
Journal of Hospital Medicine 9(7)
Page Number
436-440


The clinical learning model in medical education, specifically in the third and fourth years of medical school and in residency and fellowship training, is driven by direct patient‐care experiences and complemented by mentorship and supervision provided by experienced physicians.[1] Despite the emphasis on experiential learning in medical school and graduate training, the ability of educators to quantify the clinical experiences of learners has been limited. Case logs, often self‐reported, are frequently required during educational rotations to attempt to measure clinical experience.[2] Logs have been utilized to document diagnoses, demographics, disease severity, procedures, and chief complaints.[3, 4, 5, 6] Unfortunately, self‐reported logs are vulnerable to delayed updates, misreported data, and unreliable data validation.[7, 8] Automated data collection has been shown to be more reliable than self‐reported logs.[8, 9]

The enhanced data mining methods now available allow educators to appraise learners' exposures during patient-care interactions beyond just the diagnosis or chief complaint (eg, how many electrocardiograms learners evaluate during a cardiology rotation, how often they gain experience prescribing a specific class of antibiotics, or how many of their patients are diabetic). For example, a learner's interaction with a patient admitted for community-acquired pneumonia would, at minimum, include assessing the past medical history, reviewing outpatient medications and allergies, evaluating completed tests (chest x-ray, complete blood count, blood cultures), prescribing antibiotics, and monitoring comorbidities. The lack of knowledge regarding the frequency and context of these exposures is a key gap in our understanding of the clinical experience of inpatient trainees. Additionally, there are no data on clinical exposures specific to team-based inpatient learning. When a rotation is team-based, the educational experience is not limited to the learner's assigned patients; team rounds, cross-coverage assessments, and informal discussions of patient care provide exposure to patients who are not the learner's primary assignments.

In this study, we quantify the clinical exposures of learners on an acting internship (AI) rotation in internal medicine by utilizing the Veterans Affairs (VA) electronic medical record (EMR) as collected through the VA Veterans Integrated Service Network 10 Clinical Data Warehouse (CDW). The AI, or subinternship, is a medical school clinical rotation typically completed in the fourth year, in which the learning experience is expected to mirror a 1-month rotation of a first-year resident.[10] The AI has historically been defined as an experiential curriculum, during which students assume many of the responsibilities and activities that they will manage as graduate medical trainees.[10, 11] The exposures of AI learners include primary diagnoses encountered, problem lists evaluated at the time of admission, medications prescribed, laboratory tests ordered, and radiologic imaging evaluated. We additionally explored the exposures of the AI learner's team to assess the experiences available through team-based care.

METHODS

This study was completed at the Louis Stokes Veterans Affairs Medical Center (LSVAMC) in Cleveland, Ohio, which is an academic affiliate of the Case Western Reserve University School of Medicine. The study was approved by the LSVAMC institutional review board.

At the LSVAMC, the AI rotation in internal medicine is a 4-week inpatient rotation for fourth-year medical students in which the student is assigned to an inpatient medical team consisting of an attending physician, a senior resident, and a combination of first-year residents and acting interns. Compared to a first-year resident, the acting intern is assigned approximately half the number of admissions. The team rounds as a group at least once per day. Acting interns are permitted to place orders and write notes in the EMR; all orders require a cosignature by a resident or attending physician before they are released.

We identified students who rotated through the LSVAMC for an AI in internal medicine rotation from July 2008 to November 2011 from rotation records. Using the CDW, we queried student names and rotation dates and analyzed the results using a Structured Query Language Query Analyzer. Each student's patient encounters during the rotation were identified. A patient encounter was defined as a patient for whom the student wrote at least 1 note titled either Medicine Admission Note or Medicine Inpatient Progress Note on any date during the AI rotation. We then counted the total number of notes written by each student during the rotation. A patient identifier is associated with each note, and the number of distinct patient identifiers was tallied to establish the total number of patients for whom the individual student served as the primary caregiver during the rotation.
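
As an illustration only, an encounter query of this type might look like the following sketch. The table and column names (TIUNote, NoteTitle, AuthorName, PatientID, and the #StudentRotation roster) are hypothetical stand-ins, since the actual CDW schema is not described in this article; only the logic mirrors the query described above.

    -- Hedged sketch: count notes and distinct primary patients per acting intern.
    -- All table and column names below are illustrative, not the real CDW schema.
    SELECT  r.StudentName,
            COUNT(n.NoteID)             AS TotalNotes,
            COUNT(DISTINCT n.PatientID) AS DistinctPrimaryPatients
    FROM    #StudentRotation AS r        -- one row per student rotation (name, start, end)
    JOIN    TIUNote          AS n
            ON  n.AuthorName = r.StudentName
            AND n.NoteDate BETWEEN r.RotationStart AND r.RotationEnd
    WHERE   n.NoteTitle IN ('MEDICINE ADMISSION NOTE',
                            'MEDICINE INPATIENT PROGRESS NOTE')
    GROUP BY r.StudentName;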

We associated each patient encounter with an inpatient admission profile that included patient admission and discharge dates, International Classification of Diseases, 9th Revision (ICD-9) diagnosis codes, and admitting specialty. Primary diagnosis codes were queried for each admission and counted for individual students and in aggregate. We tallied medications prescribed during the dates of admission and ordered to a patient location consistent with an acute medical ward (thereby excluding orders placed if a patient was transferred to an intensive care unit), both for individual students and in aggregate. Similar queries were completed for laboratory and radiological testing.
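
A sketch of the medication tally follows, again with hypothetical names (Admission, MedicationOrder, and a #StudentPatient list built from the encounter query above); the ward-location filter is a placeholder for however the actual location field distinguishes acute wards from intensive care units.

    -- Hedged sketch: ward medication orders for each student's primary patients.
    -- Admission, MedicationOrder, #StudentPatient, and the 'ICU%' filter are illustrative.
    SELECT  sp.StudentName,
            m.DrugName,
            COUNT(*) AS OrderCount
    FROM    #StudentPatient AS sp                       -- (StudentName, PatientID) pairs
    JOIN    Admission       AS a ON a.PatientID = sp.PatientID
    JOIN    MedicationOrder AS m
            ON  m.AdmissionID = a.AdmissionID
            AND m.OrderDate BETWEEN a.AdmitDate AND a.DischargeDate
    WHERE   m.PatientLocation NOT LIKE 'ICU%'           -- exclude intensive care orders
    GROUP BY sp.StudentName, m.DrugName;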

The VA EMR keeps an active problem list on each patient, and items are associated with an ICD‐9 code. To assemble the active problems available for evaluation by the student on the day of a patient's admission, we queried all problem list items added prior to, but not discontinued before, the day of admission. We then tallied the results for every patient seen by each individual student and in aggregate.
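
The active-problem-list query can be sketched the same way; ProblemList and its DateEntered and DateDiscontinued columns are assumed names used only to illustrate the date logic described above.

    -- Hedged sketch: problems active on the day of each primary patient's admission.
    -- ProblemList and its columns are hypothetical; the date logic follows the text.
    SELECT  sp.StudentName,
            p.ProblemICD9,
            COUNT(*) AS ProblemExposures
    FROM    #StudentPatient AS sp
    JOIN    Admission       AS a ON a.PatientID = sp.PatientID
    JOIN    ProblemList     AS p
            ON  p.PatientID   = a.PatientID
            AND p.DateEntered <= a.AdmitDate                      -- added prior to admission
            AND (p.DateDiscontinued IS NULL
                 OR p.DateDiscontinued >= a.AdmitDate)            -- not discontinued before admission
    GROUP BY sp.StudentName, p.ProblemICD9;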

To assess the team exposures for each AI student, we queried all discharge summaries cosigned by the student's attending during the dates of the student's rotation; we assumed these discharge summaries were written by the student's team members. After excluding the student's own patients, the resultant list represented the team patient exposures for each student. This list was also queried for the number of patients seen, primary diagnoses, medications, problems, labs, and radiology. The number of team admissions included all patients who spent at least 1 day on the team while the student was rotating; all other team exposure counts included only patients who were both admitted and discharged within the dates of the student's rotation.
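
A sketch of the team-exposure query is shown below; DischargeSummary, CosignerName, and the date columns are assumptions chosen to mirror the logic just described (the date restriction corresponds to the counts other than total team admissions).

    -- Hedged sketch: team patients = discharge summaries cosigned by the student's
    -- attending during the rotation, excluding the student's own primary patients.
    -- DischargeSummary and its columns are illustrative stand-ins.
    SELECT  r.StudentName,
            ds.PatientID
    FROM    #StudentRotation AS r
    JOIN    DischargeSummary AS ds
            ON  ds.CosignerName  = r.AttendingName
            AND ds.AdmitDate     >= r.RotationStart
            AND ds.DischargeDate <= r.RotationEnd   -- stay completed within the rotation
    WHERE   ds.PatientID NOT IN (SELECT sp.PatientID
                                 FROM   #StudentPatient AS sp
                                 WHERE  sp.StudentName = r.StudentName);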

RESULTS

An AI rotation is 4 weeks in duration. Students completed a total of 128 rotations from July 30, 2008 through November 21, 2011. We included all rotations during this time period in the analysis. Tables 1 through 5 report results in 4 categories. The Student category tallies the total number of specific exposures (diagnoses, problems, medications, lab values, or radiology tests) for all patients primarily assigned to a student. The Team category tallies the total number of exposures for all patients assigned to other members of the student's inpatient team. The Primary % category identifies the percentage of students who had at least 1 assigned patient with the evaluated clinical exposure. The All Patients % category identifies the percentage of students who had at least 1 student-assigned patient or at least 1 team-assigned patient with the evaluated clinical exposure.
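
To make these four categories concrete before the tables, the sketch below shows one way the counts and percentages could be aggregated from a long-format exposure table; #Exposure(StudentName, PatientGroup, ExposureName, Hits) is a hypothetical intermediate table built from the queries above, with one row per student, patient group, and exposure that actually occurred, and 128 is the number of rotations.

    -- Hedged sketch of the table categories (Student, Team, Primary %, All Patients %).
    -- #Exposure is a hypothetical intermediate table; rows exist only for nonzero exposures.
    SELECT  e.ExposureName,
            SUM(CASE WHEN e.PatientGroup = 'student' THEN e.Hits ELSE 0 END) AS Student,
            SUM(CASE WHEN e.PatientGroup = 'team'    THEN e.Hits ELSE 0 END) AS Team,
            100.0 * COUNT(DISTINCT CASE WHEN e.PatientGroup = 'student'
                                        THEN e.StudentName END) / 128 AS PrimaryPct,
            100.0 * COUNT(DISTINCT e.StudentName) / 128               AS AllPatientsPct
    FROM    #Exposure AS e
    GROUP BY e.ExposureName;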

Table 1. Most Common Primary Diagnoses
Diagnosis | Student | Team | Primary % | All Patients %
Obstructive chronic bronchitis, with acute exacerbation | 102 | 241 | 57% | 91%
Pneumonia, organism unspecified | 91 | 228 | 49% | 91%
Acute renal failure, unspecified | 73 | 170 | 46% | 83%
Urinary tract infection, site not specified | 69 | 149 | 43% | 87%
Congestive heart failure, unspecified | 65 | 114 | 41% | 68%
Alcohol withdrawal | 46 | 101 | 26% | 61%
Alcoholic cirrhosis of liver | 28 | 98 | 16% | 57%
Cellulitis and abscess of leg, except foot | 26 | 61 | 18% | 45%
Acute pancreatitis | 23 | 51 | 16% | 43%
Intestinal infection due to Clostridium difficile | 22 | 30 | 17% | 33%
Malignant neoplasm of bronchus and lung, unspecified | 22 | 38 | 16% | 35%
Acute on chronic diastolic heart failure | 22 | 45 | 16% | 39%
Encounter for antineoplastic chemotherapy | 21 | 96 | 15% | 48%
Dehydration | 19 | 78 | 13% | 46%
Anemia, unspecified | 19 | 36 | 13% | 30%
Pneumonitis due to inhalation of food or vomitus | 19 | 25 | 13% | 24%
Syncope and collapse | 16 | 38 | 13% | 39%
Other pulmonary embolism and infarction | 15 | 41 | 12% | 26%
Unspecified pleural effusion | 15 | 37 | 10% | 34%
Acute respiratory failure | 15 | 42 | 11% | 35%

Table 2. Most Common Problem List Items
Problem | Student | Team | Primary % | All Patients %
Hypertension | 1,665 | 3,280 | 100% | 100%
Tobacco use disorder | 1,350 | 2,759 | 100% | 100%
Unknown cause morbidity/mortality | 1,154 | 2,370 | 100% | 100%
Hyperlipidemia | 1,036 | 2,044 | 99% | 100%
Diabetes mellitus 2 without complication | 865 | 1,709 | 100% | 100%
Chronic airway obstruction | 600 | 1,132 | 100% | 100%
Esophageal reflux | 583 | 1,131 | 99% | 100%
Depressive disorder | 510 | 1,005 | 100% | 100%
Dermatophytosis of nail | 498 | 939 | 98% | 100%
Alcohol dependence | 441 | 966 | 97% | 100%
Chronic ischemic heart disease | 385 | 758 | 95% | 100%
Osteoarthritis | 383 | 791 | 96% | 100%
Lumbago | 357 | 692 | 97% | 100%
Current use: anticoagulation | 342 | 629 | 94% | 100%
Anemia | 337 | 674 | 97% | 100%
Inhibited sex excitement | 317 | 610 | 91% | 100%
Congestive heart failure | 294 | 551 | 91% | 100%
Peripheral vascular disease | 288 | 529 | 88% | 99%
Sensorineural hearing loss | 280 | 535 | 88% | 99%
Post-traumatic stress disorder | 274 | 528 | 91% | 100%
Pure hypercholesterolemia | 262 | 521 | 88% | 100%
Coronary atherosclerosis | 259 | 396 | 87% | 95%
Obesity | 246 | 509 | 89% | 99%
Atrial fibrillation | 236 | 469 | 85% | 100%
Gout | 216 | 389 | 85% | 100%

Table 3. Most Common Medications Prescribed
Medication | Student | Team | Primary % | All Patients %
Omeprazole | 1,372 | 2,981 | 99% | 100%
Heparin | 1,067 | 2,271 | 95% | 96%
Sodium chloride 0.9% | 925 | 2,036 | 99% | 100%
Aspirin | 844 | 1,782 | 98% | 100%
Potassium chloride | 707 | 1,387 | 99% | 100%
Metoprolol tartrate | 693 | 1,318 | 98% | 100%
Insulin regular | 692 | 1,518 | 99% | 100%
Acetaminophen | 669 | 1,351 | 98% | 100%
Simvastatin | 648 | 1,408 | 99% | 100%
Lisinopril | 582 | 1,309 | 98% | 100%
Furosemide | 577 | 1,186 | 98% | 100%
Docusate sodium | 541 | 1,127 | 98% | 100%
Vancomycin | 531 | 977 | 98% | 100%
Multivitamin | 478 | 1,074 | 96% | 100%
Piperacillin/tazobactam | 470 | 781 | 98% | 100%
Selected examples:
Prednisone | 305 | 613 | 93% | 100%
Insulin glargine | 244 | 492 | 81% | 98%
Spironolactone | 167 | 380 | 73% | 98%
Digoxin | 68 | 125 | 40% | 77%
Meropenem | 16 | 21 | 11% | 24%

Table 4. Common Laboratory Tests (Proxy)
Lab Test | Student | Team | Primary % | All Patients %
Fingerstick glucose | 12,869 | 24,946 | 100% | 100%
Renal panel (serum sodium) | 7,728 | 14,504 | 100% | 100%
Complete blood count (blood hematocrit) | 7,372 | 14,188 | 100% | 100%
International normalized ratio | 3,725 | 6,259 | 100% | 100%
Liver function tests (serum SGOT) | 1,570 | 3,180 | 99% | 100%
Urinalysis (urine nitrite) | 789 | 1,537 | 100% | 100%
Arterial blood gas (arterial blood pH) | 767 | 704 | 78% | 99%
Hemoglobin A1C | 485 | 1,177 | 96% | 100%
Fractional excretion of sodium (urine creatinine) | 336 | 677 | 85% | 99%
Lactic acid | 195 | 314 | 65% | 96%
Ferritin | 193 | 413 | 74% | 99%
Thyroid-stimulating hormone | 184 | 391 | 55% | 64%
Lipase | 157 | 317 | 58% | 91%
Hepatitis C antibody | 139 | 327 | 70% | 98%
Haptoglobin | 101 | 208 | 46% | 83%
B-type natriuretic peptide | 98 | 212 | 48% | 87%
Cortisol | 70 | 119 | 34% | 60%
Rapid plasma reagin | 70 | 173 | 44% | 82%
Urine legionella antigen | 70 | 126 | 38% | 64%
D-dimer | 59 | 111 | 34% | 72%
Digoxin | 45 | 69 | 18% | 39%
Paracentesis labs (peritoneal fluid total protein) | 34 | 47 | 16% | 34%
Thoracentesis labs (pleural fluid WBC count) | 33 | 42 | 20% | 38%
C-reactive protein | 30 | 65 | 17% | 34%
Lumbar puncture labs (cerebrospinal fluid WBC count) | 22 | 57 | 11% | 27%
Arthrocentesis (synovial fluid WBC count) | 14 | 23 | 9% | 23%
NOTE: Abbreviations: SGOT, serum glutamic oxaloacetic transaminase; WBC, white blood cell.

Table 5. Most Common Radiology Tests
Radiology Test | Student | Team | Primary % | All Patients %
Chest, 2 views, PA and lateral | 938 | 1,955 | 100% | 100%
Chest portable | 414 | 751 | 96% | 100%
CT head without contrast | 235 | 499 | 82% | 100%
CT abdomen with contrast | 218 | 365 | 59% | 71%
CT pelvis with contrast | 213 | 364 | 59% | 70%
CT chest with contrast | 163 | 351 | 75% | 99%
Ultrasound kidney, bilateral | 119 | 208 | 61% | 92%
Abdomen 1 view | 107 | 220 | 59% | 93%
Ultrasound liver | 100 | 183 | 48% | 82%
Modified barium swallow | 93 | 130 | 53% | 82%
PET scan | 93 | 181 | 49% | 79%
Selected examples:
Acute abdomen series | 85 | 177 | 48% | 81%
CT chest, PE protocol | 67 | 126 | 37% | 73%
MRI brain with and without contrast | 56 | 109 | 34% | 66%
Chest decubitus | 51 | 76 | 34% | 60%
Portable KUB for Dobhoff placement | 42 | 62 | 30% | 48%
Ventilation/perfusion lung scan | 15 | 25 | 12% | 27%
Ultrasound thyroid | 8 | 16 | 5% | 17%
NOTE: Abbreviations: CT, computed tomography; KUB, kidney, ureter, and bladder; MRI, magnetic resonance imaging; PA, posteroanterior; PE, pulmonary embolism; PET, positron-emission tomography.

Distinct Patients and Progress Notes

The mean number of progress notes written by a student was 67.2 (standard deviation [SD] 16.3). The mean number of distinct patients evaluated by a student during a rotation was 18.4 (SD 4.2). The mean number of team admissions per student rotation was 46.7 (SD 9.6) distinct patients.

Primary Diagnoses

A total of 2,213 primary diagnoses were documented on patients assigned to students on AI rotations, and 5,323 primary diagnoses were documented on patients assigned to other members of the team during the students' rotations. Across 128 rotations, the mean number of primary diagnoses seen by a student during a rotation was therefore 58.9 (2,213/128 ≈ 17.3 for student-assigned patients and 5,323/128 ≈ 41.6 for team patients). The students and teams encountered similar diagnoses (Table 1).

Problem List

Students and teams evaluated a total of 40,015 and 78,643 past medical problems, respectively. The mean number of problems seen by a student during a rotation was 927 (313 student, 614 team). Table 2 reports the most frequent problems assigned to primary student admissions. Students and teams evaluated similar problems; hepatitis C (196 student, 410 team) was the only problem in the team top 25 that did not appear in the student top 25.

Medications

A total of 38,149 medications were prescribed to the students' primary patients, and 77,738 medications were prescribed to patients assigned to the rest of the team. The mean number of medication exposures for a student during a rotation was 905 (298 student, 607 team). The most frequently prescribed medications were similar between students and teams (Table 3). Team medications in the team top 25 but not in the student top 25 included hydralazine (300 student, 629 team), prednisone (305 student, 613 team), and oxycodone/acetaminophen (286 student, 608 team).

Labs

All laboratory tests with reported results were tallied. For common laboratory panels, single lab values (eg, serum hematocrit for a complete blood count) were selected as proxies to count the number of studies completed and evaluated. Table 4 shows a cross‐section of laboratory tests evaluated during AI rotations.

Radiology

A total of 6,197 radiology tests were completed on patients assigned to students, whereas 11,761 radiology tests were completed on patients assigned to other team members. The mean number of radiology exposures for a student was 140 (48 student, 92 team). The most frequently seen radiology tests were similar between students and teams (Table 5).

DISCUSSION

As medical educators, we assume that the clinical training years allow learners to develop essential skills through varied clinical experiences. Through exposure to direct patient care, medical decision-making scenarios, and senior physician management practices, trainees build the knowledge base for independent practice. To ensure that clinical exposure is sufficient, data on what trainees actually encounter may prove beneficial.

In this novel study, we quantified what learners encounter during a 1-month team-based inpatient rotation at a large teaching hospital. We effectively measured several aspects of internal medicine inpatient training that have been difficult to quantify in the past. The ability to extract learner-specific data is becoming increasingly available in academic teaching hospitals; for example, VA medical centers have access to a national data warehouse that is updated daily. The other requirements for using learner-specific data are an understanding of the local inpatient process (how tests are ordered, what note titles are used by trainees) and someone able to build the queries necessary for data extraction. Once built, data extraction can continue as an automated process and be used in real time by medical educators.

Our method of data collection has limitations. The orders placed on a learner's primary patients may not have been placed by the learner; for example, orders may have been placed by an overnight resident cross-covering the learner's patients. We assumed that learners evaluated the results of all tests (or medication changes) that occurred at any time during their rotation, including cross-cover periods or days off. In addition, our method for evaluating team exposure underestimates the number of team patients calculated for each learner by limiting the query to patients whose hospital stay was completed before the student left the inpatient service. It is also difficult to know how many of the exposures were truly realized by the learner; differences in learner attention, rounding styles, and presentation methods will affect the number of exposures actually attained. Finally, not all clinical exposures can be evaluated through review of an EMR. Clinical experiences such as care coordination, patient education, and family counseling cannot be easily extracted.

Data mining EMRs can enhance clinical medical education. Although our data collection was completed retrospectively, we could easily provide learner-specific data in real time to ward attendings, chief residents, and program directors. This information could direct the development of teaching tools and the individualization of curricula. Perhaps even more importantly, it would allow educators to define curricular gaps. Whether these gaps are due to the particular patient demographics of a medical center, the practice patterns and strengths of a particular institution, or the career interests of a trainee, they may skew the patient-care experiences encountered by individual trainees. We can use these data to identify differences in clinical experience and then develop opportunities for learners (clinical, didactic, or simulated) to address deficiencies and provide well-rounded clinical experiences.

Further investigation is needed to better understand the relationship between direct patient-care experience and clinical skill acquisition. This information could help guide the development of standards for the number of exposures we expect our learners to have with different diagnostic or treatment modalities prior to independent practice. By using learner data to better understand the clinical experiences of our medical trainees, we can develop more precise and focused curricula to ensure we produce competent graduates.

Acknowledgments

This material is the result of work supported with resources and the use of facilities at the Louis Stokes Cleveland VA Medical Center. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs.

References
  1. Accreditation Council for Graduate Medical Education. Program requirements for graduate medical education in internal medicine. Available at: http://www.acgme.org/acgmeweb/Portals/0/PFAssets/2013-PR-FAQ-PIF/140_internal_medicine_07012013.pdf. Originally accessed December 18, 2012.
  2. Kasten SJ, Prince ME, Lypson ML. Residents make their lists and program directors check them twice: reviewing case logs. J Grad Med Educ. 2012;34:257-260.
  3. Mattana J, Kerpen H, Lee C, et al. Quantifying internal medicine resident clinical experience using resident-selected primary diagnosis codes. J Hosp Med. 2011;6(7):395-400.
  4. Rattner SL, Louis DZ, Rabinowitz C, et al. Documenting and comparing medical students' clinical experiences. JAMA. 2001;286:1035-1040.
  5. Sequist TD, Singh S, Pereira AG, Rusinak D, Pearson SD. Use of an electronic medical record to profile the continuity clinic experiences of primary care residents. Acad Med. 2005;80:390-394.
  6. Iglar K, Polsky J, Glazier R. Using a Web-based system to monitor practice profiles in primary care residency training. Can Fam Physician. 2011;57:1030-1037.
  7. Nagler J, Harper MB, Bachur RG. An automated electronic case log: using electronic information systems to assess training in emergency medicine. Acad Emerg Med. 2006;13:733-739.
  8. Simpao A, Heitz JW, McNulty SE, Chekemian B, Bren BR, Epstein RH. The design and implementation of an automated system for logging clinical experiences using an anesthesia information management system. Anesth Analg. 2011;112(2):422-429.
  9. Nkoy FL, Petersen S, Matheny Antommaria AH, Maloney CG. Validation of an electronic system for recording medical student patient encounters. AMIA Annu Symp Proc. 2008;2008:510-514.
  10. Sidlow R. The structure and content of the medical subinternship: a national survey. J Gen Intern Med. 2001;16:550-553.
  11. Jolly BC, MacDonald MM. Education for practice: the role of practical experience in undergraduate and general clinical training. Med Educ. 1989;23:189-195.
Issue
Journal of Hospital Medicine - 9(7)
Page Number
436-440
Display Headline
Clinical exposures during internal medicine acting internship: Profiling student and team experiences
Article Source
© 2014 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Todd I. Smith, MD, 10701 East Blvd 111(W), Cleveland, Ohio 44106; E-mail: todd.smith@va.gov