Associations between attending physician workload, teaching effectiveness, and patient safety

Thomas J. Beckman, MD
Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, Minnesota

Teaching attending physicians must balance clinical workload and resident education while supervising inpatient services. The workload of teaching attendings has been increasing due to many factors. As patient complexity has increased, length of stay has decreased, creating higher turnover and higher acuity of hospitalized patients.[1, 2, 3, 4, 5] The rising burden of clinical documentation has increased demands on inpatient attending physicians' time.[6] Additionally, resident duty hour restrictions have shifted responsibility for patient care to the teaching attending.[7] These factors contribute to the perception of unsafe workloads among attending physicians[8] and could impair the ability to teach well.

Teaching effectiveness is an important facet of the graduate medical education (GME) learning environment.[9] Residents perceive that education suffers when their own workload increases,[10, 11, 12, 13, 14] and higher on‐call workload is associated with lower likelihood of participation in educational activities.[15] More contact between resident trainees and supervisory staff may improve the clinical value of inpatient rotations.[16] Program directors have expressed concern about the educational ramifications of work compression.[17, 18, 19, 20] Higher workload for attending physicians can negatively impact patient safety and quality of care,[21, 22] and perception of higher attending workload is associated with less time for teaching.[23] However, the impact of objective measures of attending physician workload on educational outcomes has not been explored. When attending physicians are responsible for increasingly complex clinical care in addition to resident education, teaching effectiveness may suffer. With growing emphasis on the educational environment's effect on healthcare quality and safety,[24] it is imperative to consider the influence of attending workload on patient care and resident education.

The combination of increasing clinical demands, fewer hours in‐house for residents, and less time for teaching has the potential to decrease attending physician teaching effectiveness. In this study, we aimed to evaluate relationships among objective measures of attending physician workload, resident perception of teaching effectiveness, and patient outcomes. We hypothesized that higher workload for attending physicians would be associated with lower ratings of teaching effectiveness and poorer outcomes for patients.

METHODS

We performed a retrospective study of attending physicians who supervised inpatient internal medicine teaching services at Mayo Clinic in Rochester, Minnesota, from July 2005 through June 2011 (6 full academic years). The team structure for each service was 1 attending physician, 1 senior resident, and 3 interns. Senior residents were on call every fourth night, and interns were on call every sixth night. Up to 2 admissions per service were received during the daytime short call, and up to 5 admissions per service were received during the overnight long call. Attending physicians included all supervising physicians in the appointment categories of attending/consultant, senior associate consultant, and chief medical resident at Mayo Clinic. Maximum continuous on-call time for residents during the study period was 30 hours. The timeframe of this study was chosen to minimize variability in resident work schedules; effective July 1, 2011, duty hours for postgraduate year 1 residents were further restricted to a maximum of 16 hours.[25]

Measures of Attending Physician Workload

To measure attending physician workload, we examined mean service census as reported at midnight, mean patient length of stay, mean number of daily admissions, and mean number of daily discharges. We also calculated mean daily outpatient relative value units (RVUs) generated as a measure of outpatient workload while the attending was supervising the inpatient service. Similar measures of workload have been used in previous research.[26] Attending physicians in this study functioned as hospitalists during their time supervising the teaching services; that is, they were not routinely assigned to any outpatient responsibilities. The only way for an outpatient RVU to be generated during their time supervising the hospital service was for the attending physician to specifically request to see an outpatient in the clinic. Attending physicians only supervised 1 teaching service at a time and had no concurrent nonteaching service obligations. Admissions were received on a rotating basis. Because patient illness severity may impact workload, we also examined mean expected mortality (per 1000 patients) for all patients on the attending physicians' hospital services.[27]

The above workload variables were measured in the specific timeframe that corresponded to the number of days an attending physician was supervising a particular team; for example, mean census was the mean number of patients on the attending physician's hospital service during his or her time supervising that resident team.
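This windowed aggregation can be sketched roughly as follows. This is a hypothetical reconstruction in pandas, not the study's actual code; the table layout and column names (`attending`, `midnight_census`, and so on) are invented for illustration.

```python
# Hypothetical sketch: averaging daily workload measures over the exact
# days each attending supervised a team. Column names are assumptions.
import pandas as pd

daily = pd.DataFrame({
    "attending": ["A", "A", "A", "B", "B"],
    "date": pd.to_datetime(
        ["2010-01-01", "2010-01-02", "2010-01-03", "2010-01-04", "2010-01-05"]),
    "midnight_census": [9, 8, 10, 7, 9],
    "admissions": [2, 3, 1, 2, 2],
    "discharges": [1, 2, 3, 2, 1],
})

# Each measure is averaged only over the attending's supervision window,
# so "mean census" is specific to that attending-team pairing.
workload = (daily.groupby("attending")
                 .agg(mean_census=("midnight_census", "mean"),
                      mean_admissions=("admissions", "mean"),
                      mean_discharges=("discharges", "mean"),
                      days_on_service=("date", "count")))
```

Here attending "A" supervised for 3 days with a mean census of 9.0; in the study, the same per-window averaging would apply to length of stay, expected mortality, and outpatient RVUs.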

Teaching Effectiveness Outcome Measures

Teaching effectiveness was measured using residents' evaluations of their attending physicians with a 5‐point scale (1 = needs improvement, 3 = average, 5 = top 10% of attending physicians) that has been previously validated in similar contexts.[28, 29, 30, 31, 32] The evaluation questions are shown in Supporting Information, Appendix A, in the online version of this article.

Patient Outcome Measures

Patient outcomes included applicable patient safety indicators (PSIs) as defined by the Agency for Healthcare Research and Quality[33] (see Supporting Information, Appendix B, in the online version of this article), patient transfers to the intensive care unit (ICU), calls to the rapid response team/cardiopulmonary resuscitation team, and patient deaths. Each indicator and event was summarized as occurred or did not occur at the service-team level. For example, for a particular attending-resident team, the occurrence of each of these events at any point during the time they worked together was recorded as occurred (1) or did not occur (0). Similar measures of patient outcomes have been used in previous research.[32]
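The occurred/did-not-occur roll-up can be illustrated with a toy example (hypothetical column names; not the study's code):

```python
# Illustrative sketch: collapsing event-level records to a binary
# "occurred (1) / did not occur (0)" flag per attending-resident team.
import pandas as pd

events = pd.DataFrame({
    "team": ["T1", "T1", "T2", "T3"],
    "psi": [0, 1, 0, 0],           # patient safety indicator event
    "icu_transfer": [1, 0, 0, 0],  # transfer to the ICU
    "death": [0, 0, 0, 0],
})

# Any event at any point during the team's time together counts as occurred.
team_outcomes = (events.groupby("team").max() > 0).astype(int)
```

Team T1 is flagged 1 for both a PSI and an ICU transfer even though the events occurred on different days, matching the service-team-level summarization described above.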

Statistical Analysis

Mixed linear models with a variance components covariance structure (including random effects to account for repeated ratings by the same residents and of the same faculty members) were fit using restricted maximum likelihood to examine associations of attending workload and demographics with teaching scores. Generalized linear regression models, estimated via generalized estimating equations, were used to examine associations of attending workload and demographics with patient outcomes. Because the outcomes were binary, the binomial distribution and logit link function were used, producing odds ratios (ORs) for covariates akin to those found in standard logistic regression. Multivariate models were used to adjust for physician demographics including age, gender, teaching appointment (consultant, senior associate consultant/temporary clinical appointment, or chief medical resident), and academic rank (professor, associate professor, assistant professor, or instructor/none).

To account for multiple comparisons, a significance level of P < 0.01 was used. All analyses were performed using SAS statistical software (version 9.3; SAS Institute Inc., Cary, NC). This study was deemed minimal risk after review by the Mayo Clinic Institutional Review Board.

RESULTS

Over the 6‐year study period, 107 attending physicians supervised internal medicine teaching services. Twenty‐three percent of teaching attending physicians were female. Mean attending age was 42.6 years. Attendings supervised a given service for between 2 and 19 days (mean [standard deviation] = 10.1 [4.1] days). There were 542 internal medicine residents on these teaching services who completed at least 1 teaching evaluation. A total of 69,386 teaching evaluation items were submitted by these residents during the study period.

In a multivariate analysis adjusted for faculty demographics and workload measures, teaching evaluation scores were significantly higher for attending physicians who had an academic rank of professor when compared to attendings who were assistant professors (β = −0.12, P = 0.007) or instructors/had no academic rank (β = −0.23, P < 0.0001). The number of days an attending physician spent with the team showed a positive association with teaching evaluations (β = +0.015, P < 0.0001).

Associations between measures of attending physician workload and teaching evaluation scores are shown in Table 1. Higher mean midnight census and mean number of daily discharges were associated with lower teaching evaluation scores (both β = −0.026, P < 0.0001). Mean number of daily admissions was associated with higher teaching scores (β = +0.021, P = 0.001). The mean expected mortality among hospitalized patients on the services supervised by teaching attendings and the outpatient RVUs generated by these attendings during the time they were supervising the hospital service showed no association with teaching scores. The average number of RVUs generated during an attending's entire time supervising the hospital service was <1.

Table 1. Associations Between Attending Physician Workload and Teaching Evaluation Scores

                                                          Multivariate Analysis*
Attending Physician Workload Measure     Mean (SD)      β         SE       99% CI              P
Midnight census                          8.86 (1.8)     -0.026    0.002    (-0.03, -0.02)      <0.0001
Length of stay, d                        6.91 (3.0)     +0.006    0.001    (0.002, 0.009)      <0.0001
Expected mortality (per 1,000 patients)  51.94 (27.4)   -0.0001   0.0001   (-0.0004, 0.0001)   0.19
Daily admissions                         2.23 (0.54)    +0.021    0.006    (0.004, 0.037)      0.001
Daily discharges                         2.13 (0.56)    -0.026    0.006    (-0.041, -0.010)    <0.0001
Daily outpatient relative value units    0.69 (1.2)     +0.004    0.003    (-0.002, 0.011)     0.10

NOTE: Abbreviations: CI, confidence interval; SD, standard deviation; SE, standard error. *Using 69,386 teaching evaluation items submitted by 542 internal medicine residents for 107 attending physicians during the study period. The multivariate model was adjusted for gender, teaching appointment, academic rank, age, and number of days the attending physician spent with the team.

Table 2 shows relationships between attending physician workload and patient outcomes for the patients on hospital services supervised by 107 attending physicians during the study period. Patient outcome data showed positive associations between measures of higher workload and PSIs. Specifically, for each 1‐patient increase in the average number of daily admissions to the attending and resident services, the cohort of patients under the team's care was 1.8 times more likely to include at least 1 patient with a PSI event (OR = 1.81, 99% confidence interval [CI]: 1.21, 2.71, P = 0.0001). Likewise, for each 1‐day increase in average length of stay, the cohort of patients under the team's care was 1.16 times more likely to have at least 1 patient with a PSI (OR = 1.16, 99% CI: 1.07, 1.26, P < 0.0001). As anticipated, mean expected mortality was associated with actual mortality, cardiopulmonary resuscitation/rapid response team calls, and ICU transfers. There were no associations between patient outcomes and workload measures of midnight census and outpatient RVUs.

Table 2. Associations Between Attending Physician Workload and Patient Outcomes (Multivariate Analysis*)

Patient Safety Indicators, n = 513
Workload measure                         OR      SE      P         99% CI
Midnight census                          1.10    0.05    0.04      (0.98, 1.24)
Length of stay                           1.16    0.04    <0.0001   (1.07, 1.26)
Expected mortality (per 1,000 patients)  1.00    0.003   0.24      (0.99, 1.01)
Daily admissions                         1.81    0.28    0.0001    (1.21, 2.71)
Daily discharges                         1.06    0.13    0.61      (0.78, 1.45)
Daily outpatient relative value units    0.81    0.07    0.01      (0.65, 1.00)

Deaths, n = 352
Workload measure                         OR      SE      P         99% CI
Midnight census                          0.91    0.04    0.03      (0.81, 1.02)
Length of stay                           1.03    0.03    0.39      (0.95, 1.12)
Expected mortality (per 1,000 patients)  1.01    0.00    0.002     (1.00, 1.02)
Daily admissions                         0.78    0.14    0.16      (0.49, 1.24)
Daily discharges                         2.36    0.38    <0.0001   (1.56, 3.57)
Daily outpatient relative value units    1.02    0.04    0.56      (0.92, 1.13)

CPR/RRT Calls, n = 409
Workload measure                         OR      SE      P         99% CI
Midnight census                          0.95    0.04    0.16      (0.86, 1.05)
Length of stay                           0.99    0.03    0.63      (0.92, 1.05)
Expected mortality (per 1,000 patients)  1.02    0.00    <0.0001   (1.01, 1.02)
Daily admissions                         1.11    0.20    0.57      (0.69, 1.77)
Daily discharges                         0.94    0.16    0.70      (0.60, 1.46)
Daily outpatient relative value units    1.05    0.04    0.23      (0.95, 1.17)

ICU Transfers, n = 737
Workload measure                         OR      SE      P         99% CI
Midnight census                          1.06    0.04    0.16      (0.96, 1.17)
Length of stay                           1.10    0.03    0.0001    (1.03, 1.18)
Expected mortality (per 1,000 patients)  1.01    0.00    0.003     (1.00, 1.01)
Daily admissions                         1.34    0.24    0.09      (0.85, 2.11)
Daily discharges                         1.09    0.16    0.53      (0.75, 1.60)
Daily outpatient relative value units    0.92    0.06    0.23      (0.77, 1.09)

NOTE: Abbreviations: CI, confidence interval; CPR, cardiopulmonary resuscitation; ICU, intensive care unit; OR, odds ratio; RRT, rapid response team; SE, standard error. *The multivariate model was adjusted for gender, teaching appointment, academic rank, age, and number of days the attending physician spent with the team.
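As an arithmetic aside (illustrative only, not part of the article), per-unit odds ratios such as those reported above compound multiplicatively over larger changes in the workload measure:

```python
# Sketch: an OR of 1.81 per 1-admission increase means k extra admissions
# multiply the odds of a PSI by 1.81**k, because the model is linear on
# the logit (log-odds) scale.
import math

or_per_admission = 1.81
beta = math.log(or_per_admission)   # coefficient on the logit scale
or_per_two = math.exp(2 * beta)     # odds multiplier for +2 admissions/day
```

So 2 additional daily admissions would correspond to roughly a 3.3-fold increase in the odds, not simply 2 x 1.81, under this model form.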

DISCUSSION

This study of internal medicine attending physician workload and resident education demonstrates that higher workload among attending physicians is associated with slightly lower teaching evaluation scores from residents as well as increased risks to patient safety.

The prior literature examining relationships between workload and teaching effectiveness is largely survey‐based and reliant upon physicians' self‐reported perceptions of workload.[10, 13, 23] The present study strengthens this evidence by using multiple objective measures of workload, objective measures of patient safety, and a large sample of teaching evaluations.

An interesting finding in this study was that the number of patient discharges per day was associated with a significant decrease in teaching scores, whereas the number of admissions per day was associated with higher teaching scores. These findings may seem contradictory, because admissions and discharges both measure physician workload. However, a likely explanation for this apparent inconsistency is that on internal medicine inpatient teaching services, much of the teaching of residents occurs at the time of a patient admission, as residents present cases to the attending physician, explore differential diagnoses, and discuss management plans. By contrast, a patient discharge tends to consist mainly of patient interaction, paperwork, and phone calls by the resident, with less input required from the attending physician. Our findings suggest that although patient admissions remain a rich opportunity for resident education, patient discharges may increase workload without improving teaching evaluations. As the inpatient hospital environment evolves, exploring options for nonphysician providers to assist with or complete patient discharges may have a beneficial effect on resident education.[34] In addition, exploring more efficient teaching strategies may be beneficial in the fast-paced inpatient learning milieu.[35]

There was a statistically significant positive association between the number of days an attending physician spent with the team and teaching evaluations. Although prior work has examined advantages and disadvantages of various resident schedules,[36, 37, 38] our results suggest scheduling models that emphasize continuity of the teaching attending and residents may be preferred to enhance teaching effectiveness. Further study would help elucidate potential implications of this finding for the scheduling of supervisory attendings to optimize education.

In this analysis, patient outcome measures were largely independent of attending physician workload, with the exception of PSIs. PSIs have been associated with longer stays in the hospital,[39, 40] which is consistent with our findings. However, mean daily admissions were also associated with PSIs. It could be expected that the more patients on a hospital service, the more PSIs will result. However, there was not a significant association between midnight census and PSIs when other variables were accounted for. Because new patient admissions are time consuming and contribute to the workload of both residents and attending physicians, it is possible that safety of the service's hospitalized patients is compromised when the team is putting time and effort toward new patients. Previous research has shown variability in PSI trends with changes in the workload environment.[41] Further studies are needed to fully explore relationships between admission volume and PSIs on teaching services.

It is worthwhile to note that attending physicians have specific responsibilities of supervision and documentation for new admissions. Although it could be argued that new admissions raise the workload for the entire team, and the higher team workload may impact teaching evaluations, previous research has demonstrated that resident burnout and well‐being, which are influenced by workload, do not impact residents' assessments of teachers.[42] In addition, metrics that could arguably be more apt to measure the workload of the team as a whole (eg, team census) did not show a significant association with patient outcomes.

This study has important limitations. First, the cohort of attending physicians, residents, and patients was from a large single institution and may not be generalizable to all settings. Second, most attending physicians in this sample were experienced teachers, so consequences of increased workload may have been managed effectively without a major impact on resident education in some cases. Third, the magnitude of change in teaching effectiveness, although statistically significant, was small and might call into question the educational significance of these findings. Fourth, although resident satisfaction does not influence teaching scores, it is possible that residents' perception of their own workload may have impacted teaching evaluations. Finally, data collection was intentionally closed at the end of the 2011 academic year because accreditation standards for resident duty hours changed again at that time.[43] Thus, these data may not directly reflect the evolving hospital learning environment but serve as a useful benchmark for future studies of workload and teaching effectiveness in the inpatient setting. Once hospitals have had sufficient time and experience with the new duty hour standards, additional studies exploring relationships between workload, teaching effectiveness, and patient outcomes may be warranted.

Limitations notwithstanding, this study shows that attending physician workload may adversely impact teaching and patient safety on internal medicine hospital services. Ongoing efforts by residency programs to optimize the learning environment should include strategies to manage the workload of supervising attendings.

Disclosures

This publication was made possible in part by Clinical and Translational Science Award grant number UL1 TR000135 from the National Center for Advancing Translational Sciences, a component of the National Institutes of Health (NIH). Its contents are solely the responsibility of the authors and do not necessarily represent the official views of NIH. Authors also acknowledge support for the Mayo Clinic Department of Medicine Write‐up and Publish grant. In addition, this study was supported in part by the Mayo Clinic Internal Medicine Residency Office of Education Innovations as part of the Accreditation Council for Graduate Medical Education Educational Innovations Project. The information contained in this article was based in part on the performance package data maintained by the University HealthSystem Consortium. Copyright 2015 UHC. All rights reserved.

References
  1. Smith LG, Humphrey H, Bordley DR. The future of residents' education in internal medicine. Am J Med. 2004;116(9):648-650.
  2. Fitzgibbons JP, Bordley DR, Berkowitz LR, Miller BW, Henderson MC. Redesigning residency education in internal medicine: a position paper from the Association of Program Directors in Internal Medicine. Ann Intern Med. 2006;144(12):920-926.
  3. O'Malley PG, Khandekar JD, Phillips RA. Residency training in the modern era: the pipe dream of less time to learn more, care better, and be more professional. Arch Intern Med. 2005;165(22):2561-2562.
  4. Murugiah K, Wang Y, Dodson JA, et al. Trends in hospitalizations among Medicare survivors of aortic valve replacement in the United States from 1999 to 2010. Ann Thorac Surg. 2015;99(2):509-517.
  5. O'Connor AB, Lang VJ, Bordley DR. Restructuring an inpatient resident service to improve outcomes for residents, students, and patients. Acad Med. 2011;86(12):1500-1507.
  6. Kuhn T, Basch P, Barr M, Yackel T. Clinical documentation in the 21st century: executive summary of a policy position paper from the American College of Physicians. Ann Intern Med. 2015;162(4):301-303.
  7. Arora V, Meltzer D. Effect of ACGME duty hours on attending physician teaching and satisfaction. Arch Intern Med. 2008;168(11):1226-1228.
  8. Michtalik HJ, Pronovost PJ, Marsteller JA, Spetz J, Brotman DJ. Identifying potential predictors of a safe attending physician workload: a survey of hospitalists. J Hosp Med. 2013;8(11):644-646.
  9. Weiss KB, Bagian JP, Nasca TJ. The clinical learning environment: the foundation of graduate medical education. JAMA. 2013;309(16):1687-1688.
  10. Auger KA, Landrigan CP, Rey JA, Sieplinga KR, Sucharew HJ, Simmons JM. Better rested, but more stressed? Evidence of the effects of resident work hour restrictions. Acad Pediatr. 2012;12(4):335-343.
  11. Lindeman BM, Sacks BC, Hirose K, Lipsett PA. Multifaceted longitudinal study of surgical resident education, quality of life, and patient care before and after July 2011. J Surg Educ. 2013;70(6):769-776.
  12. Delaroche A, Riggs T, Maisels MJ. Impact of the new 16-hour duty period on pediatric interns' neonatal education. Clin Pediatr (Phila). 2014;53(1):51-59.
  13. Haney EM, Nicolaidis C, Hunter A, Chan BK, Cooney TG, Bowen JL. Relationship between resident workload and self-perceived learning on inpatient medicine wards: a longitudinal study. BMC Med Educ. 2006;6:35.
  14. Haferbecker D, Fakeye O, Medina SP, Fieldston ES. Perceptions of educational experience and inpatient workload among pediatric residents. Hosp Pediatr. 2013;3(3):276-284.
  15. Arora VM, Georgitis E, Siddique J, et al. Association of workload of on-call medical interns with on-call sleep duration, shift duration, and participation in educational activities. JAMA. 2008;300(10):1146-1153.
  16. Haber LA, Lau CY, Sharpe BA, Arora VM, Farnan JM, Ranji SR. Effects of increased overnight supervision on resident education, decision-making, and autonomy. J Hosp Med. 2012;7(8):606-610.
  17. Drolet BC, Whittle SB, Khokhar MT, Fischer SA, Pallant A. Approval and perceived impact of duty hour regulations: survey of pediatric program directors. Pediatrics. 2013;132(5):819-824.
  18. Shea JA, Willett LL, Borman KR, et al. Anticipated consequences of the 2011 duty hours standards: views of internal medicine and surgery program directors. Acad Med. 2012;87(7):895-903.
  19. Peterson LE, Johnson H, Pugno PA, Bazemore A, Phillips RL. Training on the clock: family medicine residency directors' responses to resident duty hours reform. Acad Med. 2006;81(12):1032-1037.
  20. Antiel RM, Thompson SM, Hafferty FW, et al. Duty hour recommendations and implications for meeting the ACGME core competencies: views of residency directors. Mayo Clin Proc. 2011;86(3):185-191.
  21. Thomas M, Allen MS, Wigle DA, et al. Does surgeon workload per day affect outcomes after pulmonary lobectomies? Ann Thorac Surg. 2012;94(3):966-973.
  22. Michtalik HJ, Yeh HC, Pronovost PJ, Brotman DJ. Impact of attending physician workload on patient care: a survey of hospitalists. JAMA Intern Med. 2013;173(5):375-377.
  23. Roshetsky LM, Coltri A, Flores A, et al. No time for teaching? Inpatient attending physicians' workload and teaching before and after the implementation of the 2003 duty hours regulations. Acad Med. 2013;88(9):1293-1298.
  24. Accreditation Council for Graduate Medical Education. Clinical Learning Environment Review (CLER) Program. Available at: http://www.acgme.org/acgmeweb/tabid/436/ProgramandInstitutionalAccreditation/NextAccreditationSystem/ClinicalLearningEnvironmentReviewProgram.aspx. Accessed April 27, 2015.
  25. Accreditation Council for Graduate Medical Education. Frequently asked questions: ACGME common duty hour requirements. Available at: https://www.acgme.org/acgmeweb/Portals/0/PDFs/dh‐faqs 2011.pdf. Accessed April 27, 2015.
  26. Elliott DJ, Young RS, Brice J, Aguiar R, Kolm P. Effect of hospitalist workload on the quality and efficiency of care. JAMA Intern Med. 2014;174(5):786-793.
  27. University HealthSystem Consortium. UHC clinical database/resource manager for Mayo Clinic. Available at: http://www.uhc.edu. Data accessed August 25, 2011.
  28. Beckman TJ, Mandrekar JN. The interpersonal, cognitive and efficiency domains of clinical teaching: construct validity of a multi-dimensional scale. Med Educ. 2005;39(12):1221-1229.
  29. Beckman TJ, Cook DA, Mandrekar JN. Factor instability of clinical teaching assessment scores among general internists and cardiologists. Med Educ. 2006;40(12):1209-1216.
  30. Beckman TJ, Mandrekar JN, Engstler GJ, Ficalora RD. Determining reliability of clinical assessment scores in real time. Teach Learn Med. 2009;21(3):188-194.
  31. Reed DA, West CP, Mueller PS, Ficalora RD, Engstler GJ, Beckman TJ. Behaviors of highly professional resident physicians. JAMA. 2008;300(11):1326-1333.
  32. Thanarajasingam U, McDonald FS, Halvorsen AJ, et al. Service census caps and unit-based admissions: resident workload, conference attendance, duty hour compliance, and patient safety. Mayo Clin Proc. 2012;87(4):320-327.
  33. Agency for Healthcare Research and Quality. Patient safety indicators technical specifications updates—Version 5.0, March 2015. Available at: http://www.qualityindicators.ahrq.gov/Modules/PSI_TechSpec.aspx. Accessed May 29, 2015.
  34. Laurant M, Harmsen M, Wollersheim H, Grol R, Faber M, Sibbald B. The impact of nonphysician clinicians: do they improve the quality and cost-effectiveness of health care services? Med Care Res Rev. 2009;66(6 suppl):36S-89S.
  35. Pascoe JM, Nixon J, Lang VJ. Maximizing teaching on the wards: review and application of the One-Minute Preceptor and SNAPPS models. J Hosp Med. 2015;10(2):125-130.
  36. Luks AM, Smith CS, Robins L, Wipf JE. Resident perceptions of the educational value of night float rotations. Teach Learn Med. 2010;22(3):196-201.
  37. Wieland ML, Halvorsen AJ, Chaudhry R, Reed DA, McDonald FS, Thomas KG. An evaluation of internal medicine residency continuity clinic redesign to a 50/50 outpatient-inpatient model. J Gen Intern Med. 2013;28(8):1014-1019.
  38. Roses RE, Foley PJ, Paulson EC, et al. Revisiting the rotating call schedule in less than 80 hours per week. J Surg Educ. 2009;66(6):357-360.
  39. Zhan C, Miller MR. Excess length of stay, charges, and mortality attributable to medical injuries during hospitalization. JAMA. 2003;290(14):1868-1874.
  40. Ramanathan R, Leavell P, Wolfe LG, Duane TM. Agency for Healthcare Research and Quality patient safety indicators and mortality in surgical patients. Am Surg. 2014;80(8):801-804.
  41. Shelton J, Kummerow K, Phillips S, et al. Patient safety in the era of the 80-hour workweek. J Surg Educ. 2014;71(4):551-559.
  42. Beckman TJ, Reed DA, Shanafelt TD, West CP. Impact of resident well-being and empathy on assessments of faculty physicians. J Gen Intern Med. 2010;25(1):52-56.
  43. Wetzel CM, George A, Hanna GB, et al. Stress management training for surgeons: a randomized, controlled, intervention study. Ann Surg. 2011;253(3):488-494.
Journal of Hospital Medicine, 11(3), 169-173


To account for multiple comparisons, a significance level of P < 0.01 was used. All analyses were performed using SAS statistical software (version 9.3; SAS Institute Inc., Cary, NC). This study was deemed minimal risk after review by the Mayo Clinic Institutional Review Board.

RESULTS

Over the 6‐year study period, 107 attending physicians supervised internal medicine teaching services. Twenty‐three percent of teaching attending physicians were female. Mean attending age was 42.6 years. Attendings supervised a given service for between 2 and 19 days (mean [standard deviation] = 10.1 [4.1] days). There were 542 internal medicine residents on these teaching services who completed at least 1 teaching evaluation. A total of 69,386 teaching evaluation items were submitted by these residents during the study period.

In a multivariate analysis adjusted for faculty demographics and workload measures, teaching evaluation scores were significantly higher for attending physicians who had an academic rank of professor when compared to attendings who were assistant professors ( = 0.12, P = 0.007), or instructors/no academic rank ( = 0.23, P < 0.0001). The number of days an attending physician spent with the team showed a positive association with teaching evaluations ( = +0.015, P < 0.0001).

Associations between measures of attending physician workload and teaching evaluation scores are shown in Table 1. Mean midnight census and mean number of daily discharges were associated with lower teaching evaluation scores (both = 0.026, P < 0.0001). Mean number of daily admissions was associated with higher teaching scores ( = +0.021, P = 0.001). The mean expected mortality among hospitalized patients on the services supervised by teaching attendings and the outpatient RVUs generated by these attendings during the time they were supervising the hospital service showed no association with teaching scores. The average number of RVUs generated during an attending's entire time supervising hospital service was <1.

Associations Between Attending Physician Workload and Teaching Evaluation Scores
Attending Physician Workload MeasureMean (SD)Multivariate Analysis*
 SE99% CIP
  • NOTE: Abbreviations: CI, confidence interval; SD, standard deviation; SE, standard error. *Using 69,386 teaching evaluation items submitted by 542 internal medicine residents for 107 attending physicians during the study period. Multivariate model was adjusted for gender, teaching appointment, academic rank, age, and number of days attending physician spent with the team.

Midnight census8.86 (1.8)0.0260.002(0.03, 0.02)<0.0001
Length of stay, d6.91 (3.0)+0.0060.001(0.002, 0.009)<0.0001
Expected mortality (per 1,000 patients)51.94 (27.4)0.00010.0001(0.0004, 0.0001)0.19
Daily admissions2.23 (0.54)+0.0210.006(0.004, 0.037)0.001
Daily discharges2.13 (0.56)0.0260.006(0.041, 0.010)<0.0001
Daily outpatient relative value units0.69 (1.2)+0.0040.003(0.002, 0.011)0.10

Table 2 shows relationships between attending physician workload and patient outcomes for the patients on hospital services supervised by 107 attending physicians during the study period. Patient outcome data showed positive associations between measures of higher workload and PSIs. Specifically, for each 1‐patient increase in the average number of daily admissions to the attending and resident services, the cohort of patients under the team's care was 1.8 times more likely to include at least 1 patient with a PSI event (OR = 1.81, 99% confidence interval [CI]: 1.21, 2.71, P = 0.0001). Likewise, for each 1‐day increase in average length of stay, the cohort of patients under the team's care was 1.16 times more likely to have at least 1 patient with a PSI (OR = 1.16, 99% CI: 1.07, 1.26, P < 0.0001). As anticipated, mean expected mortality was associated with actual mortality, cardiopulmonary resuscitation/rapid response team calls, and ICU transfers. There were no associations between patient outcomes and workload measures of midnight census and outpatient RVUs.

Associations Between Attending Physician Workload and Patient Outcomes
 Patient Outcomes, Multivariate Analysis*
Patient Safety Indicators, n = 513Deaths, n = 352CPR/RRT Calls, n = 409ICU Transfers, n = 737
Workload measuresORSEP99% CIORSEP99% CIORSEP99% CIORSEP99% CI
  • NOTE: Abbreviations: CI, confidence interval; CPR, cardiopulmonary resuscitation; ICU, intensive care unit; OR, odds ratio; RRT, rapid response team; SE, standard error. *Multivariate model was adjusted for gender, teaching appointment, academic rank, age, and number of days the attending physician spent with the team.

Midnight census1.100.050.04(0.98, 1.24)0.910.040.03(0.81, 1.02)0.950.040.16(0.86, 1.05)1.060.040.16(0.96, 1.17)
Length of stay1.160.04<0.0001(1.07, 1.26)1.030.030.39(0.95, 1.12)0.990.030.63(0.92, 1.05)1.100.030.0001(1.03, 1.18)
Expected mortality (per 1,000 patients)1.000.0030.24(0.99, 1.01)1.010.000.002(1.00, 1.02)1.020.00<0.0001(1.01, 1.02)1.010.000.003(1.00, 1.01)
Daily admissions1.810.280.0001(1.21, 2.71)0.780.140.16(0.49, 1.24)1.110.200.57(0.69, 1.77)1.340.240.09(0.85, 2.11)
Daily discharges1.060.130.61(0.78, 1.45)2.360.38<0.0001(1.56, 3.57)0.940.160.70(0.60, 1.46)1.090.160.53(0.75, 1.60)
Daily outpatient relative value units0.810.070.01(0.65, 1.00)1.020.040.56(0.92, 1.13)1.050.040.23(0.95, 1.17)0.920.060.23(0.77, 1.09)

DISCUSSION

This study of internal medicine attending physician workload and resident education demonstrates that higher workload among attending physicians is associated with slightly lower teaching evaluation scores from residents as well as increased risks to patient safety.

The prior literature examining relationships between workload and teaching effectiveness is largely survey‐based and reliant upon physicians' self‐reported perceptions of workload.[10, 13, 23] The present study strengthens this evidence by using multiple objective measures of workload, objective measures of patient safety, and a large sample of teaching evaluations.

An interesting finding in this study was that the number of patient dismissals per day was associated with a significant decrease in teaching scores, whereas the number of admissions per day was associated with increased teaching scores. These findings may seem contradictory, because the number of admissions and discharges both measure physician workload. However, a likely explanation for this apparent inconsistency is that on internal medicine inpatient teaching services, much of the teaching of residents occurs at the time of a patient admission as residents are presenting cases to the attending physician, exploring differential diagnoses, and discussing management plans. By contrast, a patient dismissal tends to consist mainly of patient interaction, paperwork, and phone calls by the resident with less input required from the attending physician. Our findings suggest that although patient admissions remain a rich opportunity for resident education, patient dismissals may increase workload without improving teaching evaluations. As the inpatient hospital environment evolves, exploring options for nonphysician providers to assist with or complete patient dismissals may have a beneficial effect on resident education.[34] In addition, exploring more efficient teaching strategies may be beneficial in the fast‐paced inpatient learning milieu.[35]

There was a statistically significant positive association between the number of days an attending physician spent with the team and teaching evaluations. Although prior work has examined advantages and disadvantages of various resident schedules,[36, 37, 38] our results suggest scheduling models that emphasize continuity of the teaching attending and residents may be preferred to enhance teaching effectiveness. Further study would help elucidate potential implications of this finding for the scheduling of supervisory attendings to optimize education.

In this analysis, patient outcome measures were largely independent of attending physician workload, with the exception of PSIs. PSIs have been associated with longer stays in the hospital,[39, 40] which is consistent with our findings. However, mean daily admissions were also associated with PSIs. It could be expected that the more patients on a hospital service, the more PSIs will result. However, there was not a significant association between midnight census and PSIs when other variables were accounted for. Because new patient admissions are time consuming and contribute to the workload of both residents and attending physicians, it is possible that safety of the service's hospitalized patients is compromised when the team is putting time and effort toward new patients. Previous research has shown variability in PSI trends with changes in the workload environment.[41] Further studies are needed to fully explore relationships between admission volume and PSIs on teaching services.

It is worthwhile to note that attending physicians have specific responsibilities of supervision and documentation for new admissions. Although it could be argued that new admissions raise the workload for the entire team, and the higher team workload may impact teaching evaluations, previous research has demonstrated that resident burnout and well‐being, which are influenced by workload, do not impact residents' assessments of teachers.[42] In addition, metrics that could arguably be more apt to measure the workload of the team as a whole (eg, team census) did not show a significant association with patient outcomes.

This study has important limitations. First, the cohort of attending physicians, residents, and patients was from a large single institution and may not be generalizable to all settings. Second, most attending physicians in this sample were experienced teachers, so consequences of increased workload may have been managed effectively without a major impact on resident education in some cases. Third, the magnitude of change in teaching effectiveness, although statistically significant, was small and might call into question the educational significance of these findings. Fourth, although resident satisfaction does not influence teaching scores, it is possible that residents' perception of their own workload may have impacted teaching evaluations. Finally, data collection was intentionally closed at the end of the 2011 academic year because accreditation standards for resident duty hours changed again at that time.[43] Thus, these data may not directly reflect the evolving hospital learning environment but serve as a useful benchmark for future studies of workload and teaching effectiveness in the inpatient setting. Once hospitals have had sufficient time and experience with the new duty hour standards, additional studies exploring relationships between workload, teaching effectiveness, and patient outcomes may be warranted.

Limitations notwithstanding, this study shows that attending physician workload may adversely impact teaching and patient safety on internal medicine hospital services. Ongoing efforts by residency programs to optimize the learning environment should include strategies to manage the workload of supervising attendings.

Disclosures

This publication was made possible in part by Clinical and Translational Science Award grant number UL1 TR000135 from the National Center for Advancing Translational Sciences, a component of the National Institutes of Health (NIH). Its contents are solely the responsibility of the authors and do not necessarily represent the official views of NIH. Authors also acknowledge support for the Mayo Clinic Department of Medicine Write‐up and Publish grant. In addition, this study was supported in part by the Mayo Clinic Internal Medicine Residency Office of Education Innovations as part of the Accreditation Council for Graduate Medical Education Educational Innovations Project. The information contained in this article was based in part on the performance package data maintained by the University HealthSystem Consortium. Copyright 2015 UHC. All rights reserved.

Teaching attending physicians must balance clinical workload and resident education simultaneously while supervising inpatient services. The workload of teaching attendings has been increasing due to many factors. As patient complexity has increased, length of stay has decreased, creating higher turnover and higher acuity of hospitalized patients.[1, 2, 3, 4, 5] The rising burden of clinical documentation has increased demands on inpatient attending physicians' time.[6] Additionally, resident duty hour restrictions have shifted the responsibility for patient care to the teaching attending.[7] These factors contribute to the perception of unsafe workloads among attending physicians[8] and could impact the ability to teach well.

Teaching effectiveness is an important facet of the graduate medical education (GME) learning environment.[9] Residents perceive that education suffers when their own workload increases,[10, 11, 12, 13, 14] and higher on‐call workload is associated with lower likelihood of participation in educational activities.[15] More contact between resident trainees and supervisory staff may improve the clinical value of inpatient rotations.[16] Program directors have expressed concern about the educational ramifications of work compression.[17, 18, 19, 20] Higher workload for attending physicians can negatively impact patient safety and quality of care,[21, 22] and perception of higher attending workload is associated with less time for teaching.[23] However, the impact of objective measures of attending physician workload on educational outcomes has not been explored. When attending physicians are responsible for increasingly complex clinical care in addition to resident education, teaching effectiveness may suffer. With growing emphasis on the educational environment's effect on healthcare quality and safety,[24] it is imperative to consider the influence of attending workload on patient care and resident education.

The combination of increasing clinical demands, fewer hours in‐house for residents, and less time for teaching has the potential to decrease attending physician teaching effectiveness. In this study, we aimed to evaluate relationships among objective measures of attending physician workload, resident perception of teaching effectiveness, and patient outcomes. We hypothesized that higher workload for attending physicians would be associated with lower ratings of teaching effectiveness and poorer outcomes for patients.

METHODS

We performed a retrospective study of attending physicians who supervised inpatient internal medicine teaching services at Mayo Clinic Rochester from July 2005 through June 2011 (6 full academic years). The team structure for each service was 1 attending physician, 1 senior resident, and 3 interns. Senior residents were on call every fourth night, and interns were on call every sixth night. Up to 2 admissions per service were received during the daytime short call, and up to 5 admissions per service were received during the overnight long call. Attending physicians included all supervising physicians in the appointment categories of attending/consultant, senior associate consultant, and chief medical resident at the Mayo Clinic. Maximum continuous on‐call time for residents during the study period was 30 hours. This timeframe was chosen to minimize variability in resident work schedules; effective July 1, 2011, duty hours for postgraduate year 1 residents were further restricted to a maximum of 16 hours.[25]

Measures of Attending Physician Workload

To measure attending physician workload, we examined mean service census as reported at midnight, mean patient length of stay, mean number of daily admissions, and mean number of daily discharges. We also calculated mean daily outpatient relative value units (RVUs) generated as a measure of outpatient workload while the attending was supervising the inpatient service. Similar measures of workload have been used in previous research.[26] Attending physicians in this study functioned as hospitalists during their time supervising the teaching services; that is, they were not routinely assigned any outpatient responsibilities. The only way for an outpatient RVU to be generated during their time supervising the hospital service was for the attending physician to specifically request to see an outpatient in the clinic. Attending physicians supervised only 1 teaching service at a time and had no concurrent nonteaching service obligations. Admissions were received on a rotating basis. Because patient illness severity may impact workload, we also examined mean expected mortality (per 1,000 patients) for all patients on the attending physicians' hospital services.[27]

The above workload variables were measured in the specific timeframe that corresponded to the number of days an attending physician was supervising a particular team; for example, mean census was the mean number of patients on the attending physician's hospital service during his or her time supervising that resident team.
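As a concrete illustration, the per-attending averaging described above can be sketched as follows. The daily log structure and field names are hypothetical, invented for this sketch; they are not the study's actual data schema, and the study's analysis was performed in SAS rather than Python.

```python
from datetime import date
from statistics import mean

# Hypothetical daily log for one attending's time supervising a team.
# Field names are illustrative only, not the study's data schema.
daily_log = [
    {"day": date(2010, 7, 1), "midnight_census": 9, "admissions": 2, "discharges": 3},
    {"day": date(2010, 7, 2), "midnight_census": 8, "admissions": 3, "discharges": 1},
    {"day": date(2010, 7, 3), "midnight_census": 10, "admissions": 2, "discharges": 2},
]

def workload_means(log):
    """Average each workload measure over the attending's supervision window."""
    return {
        "mean_midnight_census": mean(d["midnight_census"] for d in log),
        "mean_daily_admissions": mean(d["admissions"] for d in log),
        "mean_daily_discharges": mean(d["discharges"] for d in log),
        "days_with_team": len(log),  # the continuity variable used in the models
    }
```

The key point of the sketch is that every workload measure is averaged only over the days the attending actually spent with that particular resident team.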

Teaching Effectiveness Outcome Measures

Teaching effectiveness was measured using residents' evaluations of their attending physicians with a 5‐point scale (1 = needs improvement, 3 = average, 5 = top 10% of attending physicians) that has been previously validated in similar contexts.[28, 29, 30, 31, 32] The evaluation questions are shown in Supporting Information, Appendix A, in the online version of this article.

Patient Outcome Measures

Patient outcomes included applicable patient safety indicators (PSIs) as defined by the Agency for Healthcare Research and Quality[33] (see Supporting Information, Appendix B, in the online version of this article), patient transfers to the intensive care unit (ICU), calls to the rapid response team/cardiopulmonary resuscitation team, and patient deaths. Each indicator and event was summarized as occurred or did not occur at the service‐team level. For example, for a particular attending–resident team, the occurrence of each of these events at any point during the time they worked together was recorded as occurred (1) or did not occur (0). Similar measures of patient outcomes have been used in previous research.[32]
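The occurred/did-not-occur coding at the service-team level can be sketched as below; the team identifiers, record format, and event names are invented for illustration.

```python
# Outcomes tracked at the service-team level (labels are illustrative).
OUTCOMES = ("psi", "death", "cpr_rrt_call", "icu_transfer")

# Hypothetical event records; team IDs and values are made up.
events = [
    {"team": "A", "event": "icu_transfer"},
    {"team": "A", "event": "icu_transfer"},  # a repeat still codes as a single "occurred"
    {"team": "B", "event": "psi"},
]

def team_level_indicators(records, teams):
    """Code each outcome as occurred (1) or did not occur (0) per team."""
    table = {t: {o: 0 for o in OUTCOMES} for t in teams}
    for r in records:
        table[r["team"]][r["event"]] = 1
    return table

indicators = team_level_indicators(events, ["A", "B"])
```

Collapsing to a binary indicator per team is what makes the logit-link models described in the Statistical Analysis section applicable.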

Statistical Analysis

Mixed linear models with variance components covariance structure (including random effects to account for repeated ratings by residents and of faculty) were fit using restricted maximum likelihood to examine associations of attending workload and demographics with teaching scores. Generalized linear regression models, estimated via generalized estimating equations, were used to examine associations of attending workload and demographics with patient outcomes. Due to the binary nature of the outcomes, the binomial distribution and logit link function were used, producing odds ratios (ORs) for covariates akin to those found in standard logistic regression. Multivariate models were used to adjust for physician demographics including age, gender, teaching appointment (consultant, senior associate consultant/temporary clinical appointment, or chief medical resident) and academic rank (professor, associate professor, assistant professor, instructor/none).
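Because the binary outcomes are modeled with a logit link, each covariate's coefficient is reported after exponentiation as an odds ratio. A minimal sketch of that conversion, using made-up numbers (the study's models were fit in SAS, not Python):

```python
from math import exp

# Two-sided critical z value for a 99% CI, matching the P < 0.01 threshold.
Z99 = 2.576

def odds_ratio_with_ci(beta, se, z=Z99):
    """Exponentiate a logit-scale coefficient and its CI limits into an odds ratio."""
    return exp(beta), (exp(beta - z * se), exp(beta + z * se))

# Illustrative inputs only; these are not the study's fitted coefficients.
or_est, (ci_lo, ci_hi) = odds_ratio_with_ci(0.59, 0.16)
```

An OR above 1 indicates higher odds of the outcome per 1-unit increase in the workload measure; the 99% CI excludes 1 exactly when the corresponding test is significant at the P < 0.01 level used here.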

To account for multiple comparisons, a significance level of P < 0.01 was used. All analyses were performed using SAS statistical software (version 9.3; SAS Institute Inc., Cary, NC). This study was deemed minimal risk after review by the Mayo Clinic Institutional Review Board.

RESULTS

Over the 6‐year study period, 107 attending physicians supervised internal medicine teaching services. Twenty‐three percent of teaching attending physicians were female. Mean attending age was 42.6 years. Attendings supervised a given service for between 2 and 19 days (mean [standard deviation] = 10.1 [4.1] days). There were 542 internal medicine residents on these teaching services who completed at least 1 teaching evaluation. A total of 69,386 teaching evaluation items were submitted by these residents during the study period.

In a multivariate analysis adjusted for faculty demographics and workload measures, teaching evaluation scores were significantly higher for attending physicians who had an academic rank of professor when compared to attendings who were assistant professors (β = 0.12, P = 0.007) or instructors/no academic rank (β = 0.23, P < 0.0001). The number of days an attending physician spent with the team showed a positive association with teaching evaluations (β = +0.015, P < 0.0001).

Associations between measures of attending physician workload and teaching evaluation scores are shown in Table 1. Mean midnight census and mean number of daily discharges were associated with lower teaching evaluation scores (both β = −0.026, P < 0.0001). Mean number of daily admissions was associated with higher teaching scores (β = +0.021, P = 0.001). The mean expected mortality among hospitalized patients on the services supervised by teaching attendings and the outpatient RVUs generated by these attendings during the time they were supervising the hospital service showed no association with teaching scores. The average number of RVUs generated during an attending's entire time supervising hospital service was <1.

Table 1. Associations Between Attending Physician Workload and Teaching Evaluation Scores*

| Attending Physician Workload Measure | Mean (SD) | β | SE | 99% CI | P |
| Midnight census | 8.86 (1.8) | −0.026 | 0.002 | (−0.03, −0.02) | <0.0001 |
| Length of stay, d | 6.91 (3.0) | +0.006 | 0.001 | (0.002, 0.009) | <0.0001 |
| Expected mortality (per 1,000 patients) | 51.94 (27.4) | −0.0001 | 0.0001 | (−0.0004, 0.0001) | 0.19 |
| Daily admissions | 2.23 (0.54) | +0.021 | 0.006 | (0.004, 0.037) | 0.001 |
| Daily discharges | 2.13 (0.56) | −0.026 | 0.006 | (−0.041, −0.010) | <0.0001 |
| Daily outpatient relative value units | 0.69 (1.2) | +0.004 | 0.003 | (−0.002, 0.011) | 0.10 |

NOTE: Abbreviations: CI, confidence interval; SD, standard deviation; SE, standard error. *Multivariate analysis using 69,386 teaching evaluation items submitted by 542 internal medicine residents for 107 attending physicians during the study period. The model was adjusted for gender, teaching appointment, academic rank, age, and number of days the attending physician spent with the team.

Table 2 shows relationships between attending physician workload and patient outcomes for the patients on hospital services supervised by 107 attending physicians during the study period. Patient outcome data showed positive associations between measures of higher workload and PSIs. Specifically, for each 1‐patient increase in the average number of daily admissions to the attending and resident services, the cohort of patients under the team's care was 1.8 times more likely to include at least 1 patient with a PSI event (OR = 1.81, 99% confidence interval [CI]: 1.21, 2.71, P = 0.0001). Likewise, for each 1‐day increase in average length of stay, the cohort of patients under the team's care was 1.16 times more likely to have at least 1 patient with a PSI (OR = 1.16, 99% CI: 1.07, 1.26, P < 0.0001). As anticipated, mean expected mortality was associated with actual mortality, cardiopulmonary resuscitation/rapid response team calls, and ICU transfers. There were no associations between patient outcomes and workload measures of midnight census and outpatient RVUs.

Table 2. Associations Between Attending Physician Workload and Patient Outcomes*

Patient Safety Indicators, n = 513
| Workload Measure | OR | SE | P | 99% CI |
| Midnight census | 1.10 | 0.05 | 0.04 | (0.98, 1.24) |
| Length of stay | 1.16 | 0.04 | <0.0001 | (1.07, 1.26) |
| Expected mortality (per 1,000 patients) | 1.00 | 0.003 | 0.24 | (0.99, 1.01) |
| Daily admissions | 1.81 | 0.28 | 0.0001 | (1.21, 2.71) |
| Daily discharges | 1.06 | 0.13 | 0.61 | (0.78, 1.45) |
| Daily outpatient relative value units | 0.81 | 0.07 | 0.01 | (0.65, 1.00) |

Deaths, n = 352
| Workload Measure | OR | SE | P | 99% CI |
| Midnight census | 0.91 | 0.04 | 0.03 | (0.81, 1.02) |
| Length of stay | 1.03 | 0.03 | 0.39 | (0.95, 1.12) |
| Expected mortality (per 1,000 patients) | 1.01 | 0.00 | 0.002 | (1.00, 1.02) |
| Daily admissions | 0.78 | 0.14 | 0.16 | (0.49, 1.24) |
| Daily discharges | 2.36 | 0.38 | <0.0001 | (1.56, 3.57) |
| Daily outpatient relative value units | 1.02 | 0.04 | 0.56 | (0.92, 1.13) |

CPR/RRT Calls, n = 409
| Workload Measure | OR | SE | P | 99% CI |
| Midnight census | 0.95 | 0.04 | 0.16 | (0.86, 1.05) |
| Length of stay | 0.99 | 0.03 | 0.63 | (0.92, 1.05) |
| Expected mortality (per 1,000 patients) | 1.02 | 0.00 | <0.0001 | (1.01, 1.02) |
| Daily admissions | 1.11 | 0.20 | 0.57 | (0.69, 1.77) |
| Daily discharges | 0.94 | 0.16 | 0.70 | (0.60, 1.46) |
| Daily outpatient relative value units | 1.05 | 0.04 | 0.23 | (0.95, 1.17) |

ICU Transfers, n = 737
| Workload Measure | OR | SE | P | 99% CI |
| Midnight census | 1.06 | 0.04 | 0.16 | (0.96, 1.17) |
| Length of stay | 1.10 | 0.03 | 0.0001 | (1.03, 1.18) |
| Expected mortality (per 1,000 patients) | 1.01 | 0.00 | 0.003 | (1.00, 1.01) |
| Daily admissions | 1.34 | 0.24 | 0.09 | (0.85, 2.11) |
| Daily discharges | 1.09 | 0.16 | 0.53 | (0.75, 1.60) |
| Daily outpatient relative value units | 0.92 | 0.06 | 0.23 | (0.77, 1.09) |

NOTE: Abbreviations: CI, confidence interval; CPR, cardiopulmonary resuscitation; ICU, intensive care unit; OR, odds ratio; RRT, rapid response team; SE, standard error. *Multivariate model was adjusted for gender, teaching appointment, academic rank, age, and number of days the attending physician spent with the team.
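Because these are per-unit odds ratios from logit-link models, they compound multiplicatively across units of exposure. A small arithmetic sketch using the reported OR of 1.81 per additional mean daily admission; this is the standard logistic-regression interpretation, not an additional analysis from the study:

```python
# A per-unit odds ratio compounds multiplicatively across units of exposure.
def compounded_or(per_unit_or, units):
    """Odds ratio implied by a change of `units` in the covariate."""
    return per_unit_or ** units

# Two additional mean daily admissions multiply the odds of a PSI by about 3.28.
or_two_more_admissions = compounded_or(1.81, 2)
```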

DISCUSSION

This study of internal medicine attending physician workload and resident education demonstrates that higher workload among attending physicians is associated with slightly lower teaching evaluation scores from residents as well as increased risks to patient safety.

The prior literature examining relationships between workload and teaching effectiveness is largely survey‐based and reliant upon physicians' self‐reported perceptions of workload.[10, 13, 23] The present study strengthens this evidence by using multiple objective measures of workload, objective measures of patient safety, and a large sample of teaching evaluations.

An interesting finding in this study was that the number of patient dismissals (discharges) per day was associated with significantly lower teaching scores, whereas the number of admissions per day was associated with higher teaching scores. These findings may seem contradictory, because the numbers of admissions and discharges both measure physician workload. However, a likely explanation for this apparent inconsistency is that on internal medicine inpatient teaching services, much of the teaching of residents occurs at the time of a patient admission, as residents present cases to the attending physician, explore differential diagnoses, and discuss management plans. By contrast, a patient dismissal tends to consist mainly of patient interaction, paperwork, and phone calls by the resident, with less input required from the attending physician. Our findings suggest that although patient admissions remain a rich opportunity for resident education, patient dismissals may increase workload without improving teaching evaluations. As the inpatient hospital environment evolves, exploring options for nonphysician providers to assist with or complete patient dismissals may have a beneficial effect on resident education.[34] In addition, exploring more efficient teaching strategies may be beneficial in the fast‐paced inpatient learning milieu.[35]

There was a statistically significant positive association between the number of days an attending physician spent with the team and teaching evaluations. Although prior work has examined advantages and disadvantages of various resident schedules,[36, 37, 38] our results suggest scheduling models that emphasize continuity of the teaching attending and residents may be preferred to enhance teaching effectiveness. Further study would help elucidate potential implications of this finding for the scheduling of supervisory attendings to optimize education.

In this analysis, patient outcome measures were largely independent of attending physician workload, with the exception of PSIs. PSIs have been associated with longer stays in the hospital,[39, 40] which is consistent with our findings. However, mean daily admissions were also associated with PSIs. It could be expected that the more patients on a hospital service, the more PSIs will result. However, there was not a significant association between midnight census and PSIs when other variables were accounted for. Because new patient admissions are time consuming and contribute to the workload of both residents and attending physicians, it is possible that safety of the service's hospitalized patients is compromised when the team is putting time and effort toward new patients. Previous research has shown variability in PSI trends with changes in the workload environment.[41] Further studies are needed to fully explore relationships between admission volume and PSIs on teaching services.

It is worthwhile to note that attending physicians have specific responsibilities of supervision and documentation for new admissions. Although it could be argued that new admissions raise the workload for the entire team, and the higher team workload may impact teaching evaluations, previous research has demonstrated that resident burnout and well‐being, which are influenced by workload, do not impact residents' assessments of teachers.[42] In addition, metrics that could arguably be more apt to measure the workload of the team as a whole (eg, team census) did not show a significant association with patient outcomes.

This study has important limitations. First, the cohort of attending physicians, residents, and patients was from a large single institution and may not be generalizable to all settings. Second, most attending physicians in this sample were experienced teachers, so consequences of increased workload may have been managed effectively without a major impact on resident education in some cases. Third, the magnitude of change in teaching effectiveness, although statistically significant, was small and might call into question the educational significance of these findings. Fourth, although resident satisfaction does not influence teaching scores, it is possible that residents' perception of their own workload may have impacted teaching evaluations. Finally, data collection was intentionally closed at the end of the 2011 academic year because accreditation standards for resident duty hours changed again at that time.[43] Thus, these data may not directly reflect the evolving hospital learning environment but serve as a useful benchmark for future studies of workload and teaching effectiveness in the inpatient setting. Once hospitals have had sufficient time and experience with the new duty hour standards, additional studies exploring relationships between workload, teaching effectiveness, and patient outcomes may be warranted.

Limitations notwithstanding, this study shows that attending physician workload may adversely impact teaching and patient safety on internal medicine hospital services. Ongoing efforts by residency programs to optimize the learning environment should include strategies to manage the workload of supervising attendings.

Disclosures

This publication was made possible in part by Clinical and Translational Science Award grant number UL1 TR000135 from the National Center for Advancing Translational Sciences, a component of the National Institutes of Health (NIH). Its contents are solely the responsibility of the authors and do not necessarily represent the official views of NIH. The authors also acknowledge support from the Mayo Clinic Department of Medicine Write‐up and Publish grant. In addition, this study was supported in part by the Mayo Clinic Internal Medicine Residency Office of Education Innovations as part of the Accreditation Council for Graduate Medical Education Educational Innovations Project. The information contained in this article was based in part on the performance package data maintained by the University HealthSystem Consortium. Copyright 2015 UHC. All rights reserved.

References
  1. Smith LG, Humphrey H, Bordley DR. The future of residents' education in internal medicine. Am J Med. 2004;116(9):648-650.
  2. Fitzgibbons JP, Bordley DR, Berkowitz LR, Miller BW, Henderson MC. Redesigning residency education in internal medicine: a position paper from the Association of Program Directors in Internal Medicine. Ann Intern Med. 2006;144(12):920-926.
  3. O'Malley PG, Khandekar JD, Phillips RA. Residency training in the modern era: the pipe dream of less time to learn more, care better, and be more professional. Arch Intern Med. 2005;165(22):2561-2562.
  4. Murugiah K, Wang Y, Dodson JA, et al. Trends in hospitalizations among Medicare survivors of aortic valve replacement in the United States from 1999 to 2010. Ann Thorac Surg. 2015;99(2):509-517.
  5. O'Connor AB, Lang VJ, Bordley DR. Restructuring an inpatient resident service to improve outcomes for residents, students, and patients. Acad Med. 2011;86(12):1500-1507.
  6. Kuhn T, Basch P, Barr M, Yackel T. Clinical documentation in the 21st century: executive summary of a policy position paper from the American College of Physicians. Ann Intern Med. 2015;162(4):301-303.
  7. Arora V, Meltzer D. Effect of ACGME duty hours on attending physician teaching and satisfaction. Arch Intern Med. 2008;168(11):1226-1228.
  8. Michtalik HJ, Pronovost PJ, Marsteller JA, Spetz J, Brotman DJ. Identifying potential predictors of a safe attending physician workload: a survey of hospitalists. J Hosp Med. 2013;8(11):644-646.
  9. Weiss KB, Bagian JP, Nasca TJ. The clinical learning environment: the foundation of graduate medical education. JAMA. 2013;309(16):1687-1688.
  10. Auger KA, Landrigan CP, Rey JA, Sieplinga KR, Sucharew HJ, Simmons JM. Better rested, but more stressed? Evidence of the effects of resident work hour restrictions. Acad Pediatr. 2012;12(4):335-343.
  11. Lindeman BM, Sacks BC, Hirose K, Lipsett PA. Multifaceted longitudinal study of surgical resident education, quality of life, and patient care before and after July 2011. J Surg Educ. 2013;70(6):769-776.
  12. Delaroche A, Riggs T, Maisels MJ. Impact of the new 16-hour duty period on pediatric interns' neonatal education. Clin Pediatr (Phila). 2014;53(1):51-59.
  13. Haney EM, Nicolaidis C, Hunter A, Chan BK, Cooney TG, Bowen JL. Relationship between resident workload and self-perceived learning on inpatient medicine wards: a longitudinal study. BMC Med Educ. 2006;6:35.
  14. Haferbecker D, Fakeye O, Medina SP, Fieldston ES. Perceptions of educational experience and inpatient workload among pediatric residents. Hosp Pediatr. 2013;3(3):276-284.
  15. Arora VM, Georgitis E, Siddique J, et al. Association of workload of on-call medical interns with on-call sleep duration, shift duration, and participation in educational activities. JAMA. 2008;300(10):1146-1153.
  16. Haber LA, Lau CY, Sharpe BA, Arora VM, Farnan JM, Ranji SR. Effects of increased overnight supervision on resident education, decision-making, and autonomy. J Hosp Med. 2012;7(8):606-610.
  17. Drolet BC, Whittle SB, Khokhar MT, Fischer SA, Pallant A. Approval and perceived impact of duty hour regulations: survey of pediatric program directors. Pediatrics. 2013;132(5):819-824.
  18. Shea JA, Willett LL, Borman KR, et al. Anticipated consequences of the 2011 duty hours standards: views of internal medicine and surgery program directors. Acad Med. 2012;87(7):895-903.
  19. Peterson LE, Johnson H, Pugno PA, Bazemore A, Phillips RL. Training on the clock: family medicine residency directors' responses to resident duty hours reform. Acad Med. 2006;81(12):1032-1037.
  20. Antiel RM, Thompson SM, Hafferty FW, et al. Duty hour recommendations and implications for meeting the ACGME core competencies: views of residency directors. Mayo Clin Proc. 2011;86(3):185-191.
  21. Thomas M, Allen MS, Wigle DA, et al. Does surgeon workload per day affect outcomes after pulmonary lobectomies? Ann Thorac Surg. 2012;94(3):966-973.
  22. Michtalik HJ, Yeh HC, Pronovost PJ, Brotman DJ. Impact of attending physician workload on patient care: a survey of hospitalists. JAMA Intern Med. 2013;173(5):375-377.
  23. Roshetsky LM, Coltri A, Flores A, et al. No time for teaching? Inpatient attending physicians' workload and teaching before and after the implementation of the 2003 duty hours regulations. Acad Med. 2013;88(9):1293-1298.
  24. Accreditation Council for Graduate Medical Education. Clinical Learning Environment Review (CLER) Program. Available at: http://www.acgme.org/acgmeweb/tabid/436/ProgramandInstitutionalAccreditation/NextAccreditationSystem/ClinicalLearningEnvironmentReviewProgram.aspx. Accessed April 27, 2015.
  25. Accreditation Council for Graduate Medical Education. Frequently asked questions: ACGME common duty hour requirements. Available at: https://www.acgme.org/acgmeweb/Portals/0/PDFs/dh‐faqs 2011.pdf. Accessed April 27, 2015.
  26. Elliott DJ, Young RS, Brice J, Aguiar R, Kolm P. Effect of hospitalist workload on the quality and efficiency of care. JAMA Intern Med. 2014;174(5):786-793.
  27. University HealthSystem Consortium. UHC clinical database/resource manager for Mayo Clinic. Available at: http://www.uhc.edu. Data accessed August 25, 2011.
  28. Beckman TJ, Mandrekar JN. The interpersonal, cognitive and efficiency domains of clinical teaching: construct validity of a multi-dimensional scale. Med Educ. 2005;39(12):1221-1229.
  29. Beckman TJ, Cook DA, Mandrekar JN. Factor instability of clinical teaching assessment scores among general internists and cardiologists. Med Educ. 2006;40(12):1209-1216.
  30. Beckman TJ, Mandrekar JN, Engstler GJ, Ficalora RD. Determining reliability of clinical assessment scores in real time. Teach Learn Med. 2009;21(3):188-194.
  31. Reed DA, West CP, Mueller PS, Ficalora RD, Engstler GJ, Beckman TJ. Behaviors of highly professional resident physicians. JAMA. 2008;300(11):1326-1333.
  32. Thanarajasingam U, McDonald FS, Halvorsen AJ, et al. Service census caps and unit-based admissions: resident workload, conference attendance, duty hour compliance, and patient safety. Mayo Clin Proc. 2012;87(4):320-327.
  33. Agency for Healthcare Research and Quality. Patient safety indicators technical specifications updates—version 5.0, March 2015. Available at: http://www.qualityindicators.ahrq.gov/Modules/PSI_TechSpec.aspx. Accessed May 29, 2015.
  34. Laurant M, Harmsen M, Wollersheim H, Grol R, Faber M, Sibbald B. The impact of nonphysician clinicians: do they improve the quality and cost-effectiveness of health care services? Med Care Res Rev. 2009;66(6 suppl):36S-89S.
  35. Pascoe JM, Nixon J, Lang VJ. Maximizing teaching on the wards: review and application of the One-Minute Preceptor and SNAPPS models. J Hosp Med. 2015;10(2):125-130.
  36. Luks AM, Smith CS, Robins L, Wipf JE. Resident perceptions of the educational value of night float rotations. Teach Learn Med. 2010;22(3):196-201.
  37. Wieland ML, Halvorsen AJ, Chaudhry R, Reed DA, McDonald FS, Thomas KG. An evaluation of internal medicine residency continuity clinic redesign to a 50/50 outpatient-inpatient model. J Gen Intern Med. 2013;28(8):1014-1019.
  38. Roses RE, Foley PJ, Paulson EC, et al. Revisiting the rotating call schedule in less than 80 hours per week. J Surg Educ. 2009;66(6):357-360.
  39. Zhan C, Miller MR. Excess length of stay, charges, and mortality attributable to medical injuries during hospitalization. JAMA. 2003;290(14):1868-1874.
  40. Ramanathan R, Leavell P, Wolfe LG, Duane TM. Agency for Healthcare Research and Quality patient safety indicators and mortality in surgical patients. Am Surg. 2014;80(8):801-804.
  41. Shelton J, Kummerow K, Phillips S, et al. Patient safety in the era of the 80-hour workweek. J Surg Educ. 2014;71(4):551-559.
  42. Beckman TJ, Reed DA, Shanafelt TD, West CP. Impact of resident well-being and empathy on assessments of faculty physicians. J Gen Intern Med. 2010;25(1):52-56.
  43. Wetzel CM, George A, Hanna GB, et al. Stress management training for surgeons: a randomized, controlled, intervention study. Ann Surg. 2011;253(3):488-494.
Issue
Journal of Hospital Medicine - 11(3)
Page Number
169-173
Display Headline
Associations between attending physician workload, teaching effectiveness, and patient safety
Article Source

© 2016 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Majken Textor Wingo, MD, Division of Primary Care Internal Medicine, Department of Medicine, Mayo Clinic, 200 First Street SW, Rochester, MN 55905; Telephone: 507‐284‐0945; Fax: 507‐266‐1799; E‐mail: wingo.majken@mayo.edu

Teaching Effectiveness in HM

Display Headline
Associations between teaching effectiveness scores and characteristics of presentations in hospital medicine continuing education

Hospital medicine (HM), which is the fastest growing medical specialty in the United States, includes more than 40,000 healthcare providers.[1] Hospitalists include practitioners from a variety of medical specialties, including internal medicine and pediatrics, and professional backgrounds, including physicians, nurse practitioners, and physician assistants.[2, 3] Originally defined as specialists in inpatient medicine, hospitalists must diagnose and manage a wide variety of clinical conditions, coordinate transitions of care, provide perioperative management to surgical patients, and contribute to quality improvement and hospital administration.[4, 5]

With the evolution of HM, the need for effective continuing medical education (CME) has become increasingly important. Courses make up the largest percentage of CME activity types,[6] which also include regularly scheduled lecture series, internet materials, and journal‐related CME. Successful CME courses require educational content that matches the learning needs of their participants.[7] In 2006, the Society of Hospital Medicine (SHM) developed core competencies in HM to guide educators in identifying professional practice gaps for useful CME.[8] However, knowing a population's characteristics and learning needs is a key first step to recognizing a practice gap.[9] Understanding these components is important to ensuring that competencies in the field of HM remain relevant to address existing practice gaps.[10] Currently, little is known about the demographic characteristics of participants in HM CME.

Research on the characteristics of effective clinical teachers in medicine has revealed the importance of establishing a positive learning climate, asking questions, diagnosing learners' needs, giving feedback, utilizing established teaching frameworks, and developing a personalized philosophy of teaching.[11] Within CME, research has generally demonstrated that courses lead to improvements in lower‐level outcomes,[12] such as satisfaction and learning, whereas higher‐level outcomes such as behavior change and impacts on patients are inconsistent.[13, 14, 15] Additionally, we have shown that participant reflection on CME is enhanced by presenters who have prior teaching experience and higher teaching effectiveness scores, by the use of audience participation, and by the incorporation of relevant content.[16, 17] Despite the existence of research on CME in general, we are not aware of prior studies regarding characteristics of effective CME in the field of HM.

To better understand and improve the quality of HM CME, we sought to describe the characteristics of participants at a large, national HM CME course, and to identify associations between characteristics of presentations and CME teaching effectiveness (CMETE) scores using a previously validated instrument.

METHODS

Study Design and Participants

This cross‐sectional study included all participants (n=368) and presenters (n=29) at the Mayo Clinic Hospital Medicine Managing Complex Patients (MCP) course in October 2014. MCP is a CME course designed for hospitalists (defined as those who spend most of their professional practice caring for hospitalized patients) and provides up to 24.5 American Medical Association Physician's Recognition Award category 1 credits. The course took place over 4 days and consisted of 32 didactic presentations, which comprised the context for data collection for this study. The structure of the course day consisted of early and late morning sessions, each made up of 3 to 5 presentations, followed by a question and answer session with presenters and a 15‐minute break. The study was deemed exempt by the Mayo Clinic Institutional Review Board.

Independent Variables: Characteristics of Participants and Presentations

Demographic characteristics of participants were obtained through anonymous surveys attached to CME teaching effectiveness forms. Variables included participant sex, professional degree, self‐identified hospitalist, medical specialty, geographic practice location, age, years in practice/level of training, practice setting, American Board of Internal Medicine (ABIM) certification of Focused Practice in Hospital Medicine, number of CME credits earned, and number of CME programs attended in the past year. These variables were selected in an effort to describe potentially relevant demographics of a national cohort of HM CME participants.

Presentation variables included use of clinical cases, audience response system (ARS), number of slides, defined goals/objectives, summary slide, and presentation length in minutes; these variables are supported by previous CME effectiveness research.[16, 17, 18, 19]

Outcome Variable: CME Teaching Effectiveness Scores

The CMETE scores for this study were obtained from an instrument described in our previous research.[16] The instrument contains 7 items on 5‐point scales (range: strongly disagree to strongly agree) that address speaker clarity and organization, relevant content, use of case examples, effective slides, interactive learning methods (eg, audience response), use of supporting evidence, appropriate amount of content, and summary of key points. Additionally, the instrument includes 2 open‐ended questions: (1) What did the speaker do well? (Please describe specific behaviors and examples) and (2) What could the speaker improve on? (Please describe specific behaviors and examples). Validity evidence for CMETE scores included factor analysis demonstrating a unidimensional model for measuring presenter feedback, along with excellent internal consistency and inter‐rater reliability.[16]

Data Analysis

A CMETE score per presentation from each attendee was calculated as the average over the 7 instrument items. A composite presentation‐level CMETE score was then computed as the average overall score within each presentation. CMETE scores were summarized using means and standard deviations (SDs). The overall CMETE scores were compared by presentation characteristics using Kruskal‐Wallis tests. To illustrate the size of observed differences, Cohen effect sizes are presented as the average difference between groups divided by the common SD. All analyses were performed using SAS version 9 (SAS Institute Inc., Cary, NC).
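The scoring described above can be sketched in a few lines of code. This is a hypothetical illustration with made-up ratings, not the study's actual analysis code:

```python
import statistics

def cmete_score(item_ratings):
    """Average the 7 instrument items (rated 1-5) into one attendee-level CMETE score."""
    assert len(item_ratings) == 7
    return statistics.mean(item_ratings)

def presentation_score(evaluations):
    """Composite presentation-level score: mean of all attendees' CMETE scores."""
    return statistics.mean(cmete_score(e) for e in evaluations)

# Hypothetical example: three attendees each rating one presentation on 7 items
evals = [
    [5, 5, 4, 5, 4, 5, 5],
    [4, 4, 4, 5, 4, 4, 4],
    [5, 4, 5, 5, 5, 4, 5],
]
print(round(presentation_score(evals), 2))
```

Group comparisons in the study were then made on these presentation-level scores using Kruskal-Wallis tests.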

RESULTS

There were 32 presentations during the MCP conference in 2014. A total of 277 (75.2%) out of 368 participants completed the survey. This yielded 7947 CMETE evaluations for analysis, with an average of 28.7 per person (median: 31; interquartile range: 27-32; range: 6-32).

Demographic characteristics of course participants are listed in Table 1. Participants (number, %) described themselves as hospitalists (181, 70.4%), ABIM certified with HM focus (48, 18.8%), physicians with MD or MBBS degrees (181, 70.4%), nurse practitioners or physician assistants (52, 20.2%), and in practice for 20+ years (73, 28.0%). The majority of participants (148, 58.3%) worked in private practice, whereas only 63 (24.8%) worked in academic settings.

Table 1. Participant Characteristics

NOTE: Abbreviations: ABIM, American Board of Internal Medicine; CME, continuing medical education; DO, doctor of osteopathic medicine; HM, hospital medicine; IM, internal medicine; MBBS, bachelor of medicine, bachelor of surgery; MD, medical doctor; NP, nurse practitioner; PA, physician assistant.

Variable: No. of Attendees (%), N=277

Sex
  Unknown: 22
  Male: 124 (48.6%)
  Female: 131 (51.4%)
Age
  Unknown: 17
  20-29 years: 11 (4.2%)
  30-39 years: 83 (31.9%)
  40-49 years: 61 (23.5%)
  50-59 years: 56 (21.5%)
  60-69 years: 38 (14.6%)
  70+ years: 11 (4.2%)
Professional degree
  Unknown: 20
  MD/MBBS: 181 (70.4%)
  DO: 23 (8.9%)
  NP: 28 (10.9%)
  PA: 24 (9.3%)
  Other: 1 (0.4%)
Medical specialty
  Unknown: 26
  Internal medicine: 149 (59.4%)
  Family medicine: 47 (18.7%)
  IM subspecialty: 14 (5.6%)
  Other: 41 (16.3%)
Geographic location
  Unknown: 16
  Western US: 48 (18.4%)
  Northeastern US: 33 (12.6%)
  Midwestern US: 98 (37.5%)
  Southern US: 40 (15.3%)
  Canada: 13 (5.0%)
  Other: 29 (11.1%)
Years of practice/training
  Unknown: 16
  Currently in training: 1 (0.4%)
  Practice 0-4 years: 68 (26.1%)
  Practice 5-9 years: 55 (21.1%)
  Practice 10-19 years: 64 (24.5%)
  Practice 20+ years: 73 (28.0%)
Practice setting
  Unknown: 23
  Academic: 63 (24.8%)
  Private, urban: 99 (39.0%)
  Private, rural: 49 (19.3%)
  Other: 43 (16.9%)
ABIM certification in HM
  Unknown: 22
  Yes: 48 (18.8%)
  No: 207 (81.2%)
Hospitalist
  Unknown: 20
  Yes: 181 (70.4%)
  No: 76 (29.6%)
CME credits claimed
  Unknown: 20
  0-24: 54 (21.0%)
  25-49: 105 (40.9%)
  50-74: 61 (23.7%)
  75-99: 15 (5.8%)
  100+: 22 (8.6%)
CME programs attended
  Unknown: 18
  0: 38 (14.7%)
  1-2: 166 (64.1%)
  3-5: 52 (20.1%)
  6+: 3 (1.2%)

CMETE scores (mean [SD]) were significantly associated with the use of ARS (4.64 [0.16]) vs no ARS (4.49 [0.16]; P=0.01, Table 2, Figure 1), longer presentations (≥30 minutes: 4.67 [0.13] vs <30 minutes: 4.51 [0.18]; P=0.02), and larger number of slides (≥50: 4.66 [0.17] vs <50: 4.55 [0.17]; P=0.04). There were no significant associations between CMETE scores and use of clinical cases, defined goals, or summary slides.

Table 2. Associations Between Presentation Characteristics and Validated Continuing Medical Education Teaching Effectiveness Scores

Presentation Variable | No. (%) | Mean Score | Standard Deviation | P Value
Use of clinical cases
  Yes | 28 (87.5%) | 4.60 | 0.18 | 0.14
  No | 4 (12.5%) | 4.49 | 0.14 |
Audience response system
  Yes | 20 (62.5%) | 4.64 | 0.16 | 0.01
  No | 12 (37.5%) | 4.49 | 0.16 |
No. of slides
  ≥50 | 10 (31.3%) | 4.66 | 0.17 | 0.04
  <50 | 22 (68.8%) | 4.55 | 0.17 |
Defined goals/objectives
  Yes | 29 (90.6%) | 4.58 | 0.18 | 0.87
  No | 3 (9.4%) | 4.61 | 0.17 |
Summary slide
  Yes | 22 (68.8%) | 4.56 | 0.18 | 0.44
  No | 10 (31.3%) | 4.62 | 0.15 |
Presentation length
  ≥30 minutes | 14 (43.8%) | 4.67 | 0.13 | 0.02
  <30 minutes | 18 (56.3%) | 4.51 | 0.18 |
Figure 1
Overall teaching effectiveness score distribution by audience response, number of slides, and presentation length. The boxes represent the interquartile range (IQR) (25th to 75th percentiles) with the median (middle horizontal line) and mean (triangle). The dashed lines extend to the last observation within a distance equal to 1.5*IQR from the top and bottom of the box. Any observations beyond that distance are plotted separately. Abbreviations: CME, continuing medical education.

The magnitude of the score differences observed in this study is substantial when considered in terms of Cohen effect sizes. For number of slides, the effect size is 0.65; for audience response, 0.94; and for presentation length, approximately 1. According to Cohen, effect sizes of 0.5 to 0.8 are moderate, and effect sizes >0.8 are large.[20, 21] Consequently, the effect sizes of our observed differences are moderate to large.
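These effect sizes can be reproduced directly from the group means and the common standard deviations reported in Table 2 (a minimal arithmetic check, not part of the original analysis):

```python
def cohen_d(mean_a, mean_b, common_sd):
    """Cohen effect size: difference between group means divided by the common SD."""
    return (mean_a - mean_b) / common_sd

# Values from Table 2: mean score with vs without each presentation characteristic
print(round(cohen_d(4.64, 4.49, 0.16), 2))  # audience response system -> 0.94
print(round(cohen_d(4.66, 4.55, 0.17), 2))  # >=50 slides vs <50 slides -> 0.65
```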

DISCUSSION

To our knowledge, this is the first study to measure associations between validated teaching effectiveness scores and characteristics of presentations in HM CME. We found that the use of ARS and longer presentations were associated with significantly higher CMETE scores. Our findings have implications for HM CME course directors and presenters as they attempt to develop methods to improve the quality of CME.

CME participants in our study spanned a wide range of ages and experience, which is consistent with national surveys of hospitalists.[22, 23] Interestingly, however, nearly 1 in 3 participants trained in a specialty other than internal medicine. Additionally, the professional degrees of participants were diverse, with 20% of participants having trained as nurse practitioners or physician assistants. These findings are at odds with an early national survey of inpatient practitioners,[22] but are consistent with recent literature suggesting that the diversity of training backgrounds among hospitalists is increasing as the field of HM evolves.[24] Hospital medicine CME providers will need to be cognizant of these demographic changes as they work to identify practice gaps and apply appropriate educational methods.

The use of an ARS allows for increased participation and engagement among lecture attendees, which in turn promotes active learning.[25, 26, 27] The association of higher teaching scores with the use of ARS is consistent with previous research in other CME settings such as clinical round tables and medical grand rounds.[17, 28] As it pertains to HM specifically, our findings also build upon a recent study by Sehgal et al., which reported on the novel use of bedside CME to enhance interactive learning and discussion among hospitalists, and which was viewed favorably by course participants.[29]

The reasons why longer presentations in our study were linked to higher CMETE scores are not entirely clear, as previous CME research has failed to demonstrate this relationship.[18] One possibility is that course participants prefer learning from presentations that provide granular, content‐rich information. Another is that unmeasured characteristics of the presenters who gave longer presentations, such as experience and expertise, were responsible for the observed increase in CMETE scores. Yet another possibility is that effective presentations were longer due to the use of ARS, which was also associated with better CMETE scores. This explanation may be plausible because the ARS requires additional slides and provides opportunities for audience interaction, both of which may lengthen the duration of any given presentation.

This study has several limitations. First, this was a single CME conference sponsored by a large academic medical center, which may limit generalizability, especially to smaller conferences in community settings. However, the audience was large and diverse in terms of participants' experiences, practice settings, professional backgrounds, and geographic locations. Furthermore, the demographic characteristics of hospitalists at our course appear very similar to a recently reported national cross‐section of hospitalist groups.[30] Second, this is a cross‐sectional study without a comparison group. Nonetheless, a systematic review showed that most published education research studies involve single‐group designs without comparison groups.[31] Third, the outcomes of the study include attitudes and objectively measured presenter behaviors, such as the use of ARS, but not patient‐related outcomes. Nonetheless, evidence indicates that the majority of medical education research does not present outcomes beyond knowledge,[31] and it has been noted that behavior‐related outcomes strike the ideal balance between feasibility and rigor.[32, 33] Finally, the instrument used in this study to measure teaching effectiveness is supported by prior validity evidence.[16]

In summary, we found that hospital medicine CME presentations, which are longer and use audience responses, are associated with greater teaching effectiveness ratings by CME course participants. These findings build upon previous CME research and suggest that CME course directors and presenters should strive to incorporate opportunities that promote audience engagement and participation. Additionally, this study adds to the existing validity of evidence for the CMETE assessment tool. We believe that future research should explore potential associations between teacher effectiveness and patient‐related outcomes, and determine whether course content that reflects the SHM core competencies improves CME teaching effectiveness scores.

Disclosure

Nothing to report.

Files
References
  1. Society of Hospital Medicine. 2013/2014 press kit. Available at: http://www.hospitalmedicine.org/Web/Media_Center/Web/Media_Center/Media_Center.aspx?hkey=e26ceba7-ba93-4e50-8eb1-1ccc75d6f0fd. Accessed May 18, 2015.
  2. Kleinpell RM, Hanson NA, Buchner BR, Winters R, Wilson MJ, Keck AC. Hospitalist services: an evolving opportunity. Nurse Pract. 2008;33:9-10.
  3. Wall S, Scudamore D, Chin J, et al. The evolving role of the pediatric nurse practitioner in hospital medicine. J Hosp Med. 2014;9:261-265.
  4. Wachter RM, Goldman L. The emerging role of “hospitalists” in the American health care system. N Engl J Med. 1996;335:514-517.
  5. Society of Hospital Medicine. Definition of a hospitalist and hospital medicine. Available at: http://www.hospitalmedicine.org/Web/About_SHM/Hospitalist_Definition/Web/About_SHM/Industry/Hospital_Medicine_Hospital_Definition.aspx. Accessed February 16, 2015.
  6. Accreditation Council for Continuing Medical Education. 2013 annual report data executive summary. Available at: http://www.accme.org/sites/default/files/630_2013_Annual_Report_20140715_0.pdf. Accessed February 16, 2015.
  7. Muroff LR. The anatomy of an outstanding CME meeting. J Am Coll Radiol. 2005;2:534-540.
  8. McKean SC, Budnitz TL, Dressler DD, Amin AN, Pistoria MJ. How to use The Core Competencies in Hospital Medicine: a framework for curriculum development. J Hosp Med. 2006;1:57-67.
  9. Wittich CM, Chutka DS, Mauck KF, Berger RA, Litin SC, Beckman TJ. Perspective: a practical approach to defining professional practice gaps for continuing medical education. Acad Med. 2012;87:582-585.
  10. Dressler DD, Pistoria MJ, Budnitz TL, McKean SC, Amin AN. Core competencies in hospital medicine: development and methodology. J Hosp Med. 2006;1(suppl 1):148-156.
  11. Beckman TJ, Lee MC. Proposal for a collaborative approach to clinical teaching. Mayo Clin Proc. 2009;84:339-344.
  12. Beckman TJ, Cook DA. Developing scholarly projects in education: a primer for medical teachers. Med Teach. 2007;29:210-218.
  13. Mansouri M, Lockyer J. A meta-analysis of continuing medical education effectiveness. J Contin Educ Health Prof. 2007;27:6-15.
  14. Moore DE, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof. 2009;29:1-15.
  15. Cervero RM, Gaines JK. Effectiveness of continuing medical education: updated synthesis of systematic reviews. Available at: http://www.accme.org/sites/default/files/652_20141104_Effectiveness_of_Continuing_Medical_Education_Cervero_and_Gaines.pdf. Accessed March 25, 2015.
  16. Wittich CM, Mauck KF, Mandrekar JN, et al. Improving participant feedback to continuing medical education presenters in internal medicine: a mixed-methods study. J Gen Intern Med. 2012;27:425-431.
  17. Wittich CM, Szostek JH, Reed DA, et al. Measuring faculty reflection on medical grand rounds at Mayo Clinic: associations with teaching experience, clinical exposure, and presenter effectiveness. Mayo Clin Proc. 2013;88:277-284.
  18. Copeland HL, Longworth DL, Hewson MG, Stoller JK. Successful lecturing: a prospective study to validate attributes of the effective medical lecture. J Gen Intern Med. 2000;15:366-371.
  19. Shewchuk RM, Schmidt HJ, Benarous A, Bennett NL, Abdolrasulnia M, Casebeer LL. A standardized approach to assessing physician expectations and perceptions of continuing medical education. J Contin Educ Health Prof. 2007;27:173-182.
  20. Cohen J. Statistical Power Analysis for the Behavioral Sciences. New York, NY: Academic Press; 1977.
  21. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Erlbaum; 1988.
  22. Lindenauer PK, Pantilat SZ, Katz PP, Wachter RM. Hospitalists and the practice of inpatient medicine: results of a survey of the National Association of Inpatient Physicians. Ann Intern Med. 1999;130:343-349.
  23. Hinami K, Whelan CT, Wolosin RJ, Miller JA, Wetterneck TB. Worklife and satisfaction of hospitalists: toward flourishing careers. J Gen Intern Med. 2012;27:28-36.
  24. Kartha A, Restuccia JD, Burgess JF, et al. Nurse practitioner and physician assistant scope of practice in 118 acute care hospitals. J Hosp Med. 2014;9:615-620.
  25. Cain J, Robinson E. A primer on audience response systems: current applications and future considerations. Am J Pharm Educ. 2008;72:77.
  26. Davis N, Davis D, Bloch R. Continuing medical education: AMEE education guide no 35. Med Teach. 2008;30:652-666.
  27. Caldwell JE. Clickers in the large classroom: current research and best-practice tips. CBE Life Sci Educ. 2007;6:9-20.
  28. Miller RG, Ashar BH, Getz KJ. Evaluation of an audience response system for the continuing education of health professionals. J Contin Educ Health Prof. 2003;23:109-115.
  29. Sehgal NL, Wachter RM, Vidyarthi AR. Bringing continuing medical education to the bedside: the University of California, San Francisco Hospitalist Mini-College. J Hosp Med. 2014;9:129-134.
  30. Society of Hospital Medicine. 2014 State of Hospital Medicine Report. Philadelphia, PA: Society of Hospital Medicine; 2014.
  31. Reed DA, Cook DA, Beckman TJ, Levine RB, Kern DE, Wright SM. Association between funding and quality of published medical education research. JAMA. 2007;298:1002-1009.
  32. Shea JA. Mind the gap: some reasons why medical education research is different from health services research. Med Educ. 2001;35:319-320.
  33. Cook DA, Beckman TJ. Reflections on experimental research in medical education. Adv Health Sci Educ Theory Pract. 2010;15:455-464.
Issue
Journal of Hospital Medicine - 10(9)
Page Number
569-573

Hospital medicine (HM), which is the fastest growing medical specialty in the United States, includes more than 40,000 healthcare providers.[1] Hospitalists include practitioners from a variety of medical specialties, including internal medicine and pediatrics, and professional backgrounds such as physicians, nurse practitioners, and physician assistants.[2, 3] Originally defined as specialists of inpatient medicine, hospitalists must diagnose and manage a wide variety of clinical conditions, coordinate transitions of care, provide perioperative management to surgical patients, and contribute to quality improvement and hospital administration.[4, 5]

With the evolution of HM, the need for effective continuing medical education (CME) has become increasingly important. Courses make up the largest percentage of CME activity types,[6] which also include regularly scheduled lecture series, internet materials, and journal-related CME. Successful CME courses require educational content that matches the learning needs of their participants.[7] In 2006, the Society of Hospital Medicine (SHM) developed core competencies in HM to guide educators in identifying professional practice gaps for useful CME.[8] However, knowing a population's characteristics and learning needs is a key first step to recognizing a practice gap.[9] Understanding these components is important to ensuring that competencies in the field of HM remain relevant to address existing practice gaps.[10] Currently, little is known about the demographic characteristics of participants in HM CME.

Research on the characteristics of effective clinical teachers in medicine has revealed the importance of establishing a positive learning climate, asking questions, diagnosing learners' needs, giving feedback, utilizing established teaching frameworks, and developing a personalized philosophy of teaching.[11] Within CME, research has generally demonstrated that courses lead to improvements in lower level outcomes,[12] such as satisfaction and learning, yet higher level outcomes such as behavior change and impacts on patients are inconsistent.[13, 14, 15] Additionally, we have shown that participant reflection on CME is enhanced by presenters who have prior teaching experience and higher teaching effectiveness scores, by the use of audience participation, and by incorporating relevant content.[16, 17] Despite the existence of research on CME in general, we are not aware of prior studies regarding characteristics of effective CME in the field of HM.

To better understand and improve the quality of HM CME, we sought to describe the characteristics of participants at a large, national HM CME course, and to identify associations between characteristics of presentations and CME teaching effectiveness (CMETE) scores using a previously validated instrument.

METHODS

Study Design and Participants

This cross‐sectional study included all participants (n=368) and presenters (n=29) at the Mayo Clinic Hospital Medicine Managing Complex Patients (MCP) course in October 2014. MCP is a CME course designed for hospitalists (defined as those who spend most of their professional practice caring for hospitalized patients) and provides up to 24.5 American Medical Association Physician's Recognition Award category 1 credits. The course took place over 4 days and consisted of 32 didactic presentations, which comprised the context for data collection for this study. The structure of the course day consisted of early and late morning sessions, each made up of 3 to 5 presentations, followed by a question and answer session with presenters and a 15‐minute break. The study was deemed exempt by the Mayo Clinic Institutional Review Board.

Independent Variables: Characteristics of Participants and Presentations

Demographic characteristics of participants were obtained through anonymous surveys attached to CME teaching effectiveness forms. Variables included participant sex, professional degree, self‐identified hospitalist, medical specialty, geographic practice location, age, years in practice/level of training, practice setting, American Board of Internal Medicine (ABIM) certification of Focused Practice in Hospital Medicine, number of CME credits earned, and number of CME programs attended in the past year. These variables were selected in an effort to describe potentially relevant demographics of a national cohort of HM CME participants.

Presentation variables included use of clinical cases, audience response system (ARS), number of slides, defined goals/objectives, summary slide, and presentation length in minutes; these variables are supported by previous CME effectiveness research.[16, 17, 18, 19]

Outcome Variable: CME Teaching Effectiveness Scores

The CMETE scores for this study were obtained from an instrument described in our previous research.[16] The instrument contains 7 items on 5‐point scales (range: strongly disagree to strongly agree) that address speaker clarity and organization, relevant content, use of case examples, effective slides, interactive learning methods (eg, audience response), use of supporting evidence, appropriate amount of content, and summary of key points. Additionally, the instrument includes 2 open‐ended questions: (1) What did the speaker do well? (Please describe specific behaviors and examples) and (2) What could the speaker improve on? (Please describe specific behaviors and examples). Validity evidence for CMETE scores included factor analysis demonstrating a unidimensional model for measuring presenter feedback, along with excellent internal consistency and inter‐rater reliability.[16]

Data Analysis

A CMETE score per presentation from each attendee was calculated as the average over the 7 instrument items. A composite presentation‐level CMETE score was then computed as the average overall score within each presentation. CMETE scores were summarized using means and standard deviations (SDs). The overall CMETE scores were compared by presentation characteristics using Kruskal‐Wallis tests. To illustrate the size of observed differences, Cohen effect sizes are presented as the average difference between groups divided by the common SD. All analyses were performed using SAS version 9 (SAS Institute Inc., Cary, NC).
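The two-step averaging and group comparison described above can be sketched as follows. The evaluation data and the ARS labels here are hypothetical (the study's dataset is not public), and the analysis was actually performed in SAS rather than Python; this is only an illustration of the scoring pipeline.

```python
# Minimal sketch of the CMETE scoring pipeline, on hypothetical data.
from statistics import mean
from scipy.stats import kruskal

# Each evaluation: (presentation_id, [seven 5-point CMETE item ratings]).
evaluations = [
    (1, [5, 5, 4, 5, 5, 4, 5]),
    (1, [4, 4, 4, 5, 4, 4, 4]),
    (2, [4, 4, 4, 3, 4, 4, 4]),
    (2, [4, 3, 4, 4, 4, 3, 4]),
    (3, [5, 4, 5, 4, 5, 4, 5]),
    (3, [5, 5, 5, 5, 4, 5, 5]),
    (4, [3, 4, 3, 4, 3, 4, 3]),
    (4, [4, 4, 3, 3, 4, 4, 3]),
]

# Step 1: per-evaluation CMETE score = mean of the 7 item ratings.
# Step 2: presentation-level score = mean of that presentation's evaluations.
by_presentation = {}
for pres_id, items in evaluations:
    by_presentation.setdefault(pres_id, []).append(mean(items))
presentation_scores = {p: mean(scores) for p, scores in by_presentation.items()}

# Step 3: compare presentation-level scores across groups (here a
# hypothetical ARS vs no-ARS split) with a Kruskal-Wallis test.
uses_ars = {1: True, 2: False, 3: True, 4: False}
ars_scores = [s for p, s in presentation_scores.items() if uses_ars[p]]
no_ars_scores = [s for p, s in presentation_scores.items() if not uses_ars[p]]
statistic, p_value = kruskal(ars_scores, no_ars_scores)
```

Note that because each presentation contributes a single composite score, the unit of analysis for the group comparisons is the presentation (n=32 in the study), not the individual evaluation.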

RESULTS

There were 32 presentations during the MCP conference in 2014. A total of 277 (75.2%) out of 368 participants completed the survey. This yielded 7947 CMETE evaluations for analysis, with an average of 28.7 per person (median: 31; interquartile range: 27-32; range: 6-32).
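The reported counts above are internally consistent, which a quick arithmetic check confirms:

```python
# Consistency check of the reported counts (numbers from the Results text).
participants = 368   # registered course participants
respondents = 277    # completed the survey
evaluations = 7947   # individual CMETE evaluations collected

response_rate = respondents / participants    # ~0.75
evals_per_person = evaluations / respondents  # ~28.7, matching the reported mean
```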

Demographic characteristics of course participants are listed in Table 1. Participants (number, %) described themselves as hospitalists (181, 70.4%), ABIM certified with HM focus (48, 18.8%), physicians with MD or MBBS degrees (181, 70.4%), nurse practitioners or physician assistants (52, 20.2%), and in practice for 20 or more years (73, 28.0%). The majority of participants (148, 58.3%) worked in private practice, whereas only 63 (24.8%) worked in academic settings.

Participant Characteristics
Variable: No. of Attendees (%), N=277

NOTE: Abbreviations: ABIM, American Board of Internal Medicine; CME, continuing medical education; DO, doctor of osteopathic medicine; HM, hospital medicine; IM, internal medicine; MBBS, bachelor of medicine, bachelor of surgery; MD, medical doctor; NP, nurse practitioner; PA, physician assistant.

Sex
  Unknown: 22
  Male: 124 (48.6%)
  Female: 131 (51.4%)
Age
  Unknown: 17
  20-29 years: 11 (4.2%)
  30-39 years: 83 (31.9%)
  40-49 years: 61 (23.5%)
  50-59 years: 56 (21.5%)
  60-69 years: 38 (14.6%)
  70+ years: 11 (4.2%)
Professional degree
  Unknown: 20
  MD/MBBS: 181 (70.4%)
  DO: 23 (8.9%)
  NP: 28 (10.9%)
  PA: 24 (9.3%)
  Other: 1 (0.4%)
Medical specialty
  Unknown: 26
  Internal medicine: 149 (59.4%)
  Family medicine: 47 (18.7%)
  IM subspecialty: 14 (5.6%)
  Other: 41 (16.3%)
Geographic location
  Unknown: 16
  Western US: 48 (18.4%)
  Northeastern US: 33 (12.6%)
  Midwestern US: 98 (37.5%)
  Southern US: 40 (15.3%)
  Canada: 13 (5.0%)
  Other: 29 (11.1%)
Years of practice/training
  Unknown: 16
  Currently in training: 1 (0.4%)
  Practice 0-4 years: 68 (26.1%)
  Practice 5-9 years: 55 (21.1%)
  Practice 10-19 years: 64 (24.5%)
  Practice 20+ years: 73 (28.0%)
Practice setting
  Unknown: 23
  Academic: 63 (24.8%)
  Private-urban: 99 (39.0%)
  Private-rural: 49 (19.3%)
  Other: 43 (16.9%)
ABIM certification in HM
  Unknown: 22
  Yes: 48 (18.8%)
  No: 207 (81.2%)
Hospitalist
  Unknown: 20
  Yes: 181 (70.4%)
  No: 76 (29.6%)
CME credits claimed
  Unknown: 20
  0-24: 54 (21.0%)
  25-49: 105 (40.9%)
  50-74: 61 (23.7%)
  75-99: 15 (5.8%)
  100+: 22 (8.6%)
CME programs attended
  Unknown: 18
  0: 38 (14.7%)
  1-2: 166 (64.1%)
  3-5: 52 (20.1%)
  6+: 3 (1.2%)

CMETE scores (mean [SD]) were significantly associated with the use of ARS (4.64 [0.16] vs no ARS: 4.49 [0.16]; P=0.01; Table 2, Figure 1), longer presentations (≥30 minutes: 4.67 [0.13] vs <30 minutes: 4.51 [0.18]; P=0.02), and a larger number of slides (≥50: 4.66 [0.17] vs <50: 4.55 [0.17]; P=0.04). There were no significant associations between CMETE scores and use of clinical cases, defined goals, or summary slides.

Associations Between Presentation Characteristics and Validated Continuing Medical Education Teaching Effectiveness Scores
Presentation Variable: No. (%); Mean Score; Standard Deviation; P Value

Use of clinical cases
  Yes: 28 (87.5%); mean 4.60; SD 0.18; P=0.14
  No: 4 (12.5%); mean 4.49; SD 0.14
Audience response system
  Yes: 20 (62.5%); mean 4.64; SD 0.16; P=0.01
  No: 12 (37.5%); mean 4.49; SD 0.16
No. of slides
  ≥50: 10 (31.3%); mean 4.66; SD 0.17; P=0.04
  <50: 22 (68.8%); mean 4.55; SD 0.17
Defined goals/objectives
  Yes: 29 (90.6%); mean 4.58; SD 0.18; P=0.87
  No: 3 (9.4%); mean 4.61; SD 0.17
Summary slide
  Yes: 22 (68.8%); mean 4.56; SD 0.18; P=0.44
  No: 10 (31.3%); mean 4.62; SD 0.15
Presentation length
  ≥30 minutes: 14 (43.8%); mean 4.67; SD 0.13; P=0.02
  <30 minutes: 18 (56.3%); mean 4.51; SD 0.18
Figure 1
Overall teaching effectiveness score distribution by audience response, number of slides, and presentation length. The boxes represent the interquartile range (IQR) (25th to 75th percentiles) with the median (middle horizontal line) and mean (triangle). The dashed lines extend to the last observation within a distance equal to 1.5*IQR from the top and bottom of the box. Any observations beyond that distance are plotted separately. Abbreviations: CME, continuing medical education.

The magnitude of the score differences observed in this study is substantial when considered in terms of Cohen's effect sizes: 0.65 for number of slides, 0.94 for audience response, and approximately 1 for presentation length. According to Cohen, effect sizes of 0.5 to 0.8 are moderate, and effect sizes >0.8 are large. Consequently, the effect sizes of our observed differences are moderate to large.[20, 21]
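Under the definition given in the Data Analysis section (average between-group difference divided by the common SD), the quoted effect sizes can be reproduced from the Table 2 summary statistics. The `cohen_d` helper below, and the choice of averaging the two group SDs to obtain a "common SD", are illustrative assumptions rather than the study's exact code, but they recover the reported values.

```python
# Reproducing the reported Cohen effect sizes from Table 2 group statistics.
def cohen_d(mean1, mean2, sd1, sd2):
    """Mean difference divided by a common SD (here, the average of the two)."""
    common_sd = (sd1 + sd2) / 2
    return (mean1 - mean2) / common_sd

d_ars    = cohen_d(4.64, 4.49, 0.16, 0.16)  # audience response system: ~0.94
d_slides = cohen_d(4.66, 4.55, 0.17, 0.17)  # >=50 vs <50 slides: ~0.65
d_length = cohen_d(4.67, 4.51, 0.13, 0.18)  # >=30 vs <30 minutes: ~1.03
```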

DISCUSSION

To our knowledge, this is the first study to measure associations between validated teaching effectiveness scores and characteristics of presentations in HM CME. We found that the use of ARS and longer presentations were associated with significantly higher CMETE scores. Our findings have implications for HM CME course directors and presenters as they attempt to develop methods to improve the quality of CME.

CME participants in our study crossed a wide range of ages and experience, which is consistent with national surveys of hospitalists.[22, 23] Interestingly, however, nearly 1 in 3 participants trained in a specialty other than internal medicine. Additionally, the professional degrees of participants were diverse, with 20% of participants having trained as nurse practitioners or physician assistants. These findings are at odds with an early national survey of inpatient practitioners,[22] but consistent with recent literature that the diversity of training backgrounds among hospitalists is increasing as the field of HM evolves.[24] Hospital medicine CME providers will need to be cognizant of these demographic changes as they work to identify practice gaps and apply appropriate educational methods.

The use of an ARS allows for increased participation and engagement among lecture attendees, which in turn promotes active learning.[25, 26, 27] The association of higher teaching scores with the use of ARS is consistent with previous research in other CME settings such as clinical round tables and medical grand rounds.[17, 28] As it pertains to HM specifically, our findings also build upon a recent study by Sehgal et al., which reported on the novel use of bedside CME to enhance interactive learning and discussion among hospitalists, and which was viewed favorably by course participants.[29]

The reasons why longer presentations in our study were linked to higher CMETE scores are not entirely clear, as previous CME research has failed to demonstrate this relationship.[18] One possibility is that course participants prefer learning from presentations that provide granular, content-rich information. Another is that unmeasured characteristics of the presenters who gave longer presentations, such as experience and expertise, were responsible for the observed increase in CMETE scores. Yet another possibility is that effective presentations were longer because of the use of ARS, which was also associated with better CMETE scores. This explanation is plausible because ARS requires additional slides and provides opportunities for audience interaction, both of which may lengthen a presentation.

This study has several limitations. First, it involved a single CME conference sponsored by a large academic medical center, which may limit generalizability, especially to smaller conferences in community settings. However, the audience was large and diverse in terms of participants' experiences, practice settings, professional backgrounds, and geographic locations. Furthermore, the demographic characteristics of hospitalists at our course appear very similar to a recently reported national cross-section of hospitalist groups.[30] Second, this was a cross-sectional study without a comparison group. Nonetheless, a systematic review showed that most published education research studies involve single-group designs without comparison groups.[31] Last, the outcomes of the study include attitudes and objectively measured presenter behaviors, such as the use of ARS, but not patient-related outcomes. Nonetheless, evidence indicates that the majority of medical education research does not report outcomes beyond knowledge,[31] it has been noted that behavior-related outcomes strike an ideal balance between feasibility and rigor,[32, 33] and the instrument used in this study to measure teaching effectiveness is supported by prior validity evidence.[16]

In summary, we found that hospital medicine CME presentations that are longer and use audience response systems are associated with higher teaching effectiveness ratings from CME course participants. These findings build upon previous CME research and suggest that CME course directors and presenters should incorporate opportunities that promote audience engagement and participation. Additionally, this study adds to the existing validity evidence for the CMETE assessment tool. Future research should explore potential associations between teaching effectiveness and patient-related outcomes, and determine whether course content that reflects the SHM core competencies improves CME teaching effectiveness scores.

Disclosure

Nothing to report.

Hospital medicine (HM), which is the fastest growing medical specialty in the United States, includes more than 40,000 healthcare providers.[1] Hospitalists include practitioners from a variety of medical specialties, including internal medicine and pediatrics, and professional backgrounds such as physicians, nurse practitioners. and physician assistants.[2, 3] Originally defined as specialists of inpatient medicine, hospitalists must diagnose and manage a wide variety of clinical conditions, coordinate transitions of care, provide perioperative management to surgical patients, and contribute to quality improvement and hospital administration.[4, 5]

With the evolution of the HM, the need for effective continuing medical education (CME) has become increasingly important. Courses make up the largest percentage of CME activity types,[6] which also include regularly scheduled lecture series, internet materials, and journal‐related CME. Successful CME courses require educational content that matches the learning needs of its participants.[7] In 2006, the Society for Hospital Medicine (SHM) developed core competencies in HM to guide educators in identifying professional practice gaps for useful CME.[8] However, knowing a population's characteristics and learning needs is a key first step to recognizing a practice gap.[9] Understanding these components is important to ensuring that competencies in the field of HM remain relevant to address existing practice gaps.[10] Currently, little is known about the demographic characteristics of participants in HM CME.

Research on the characteristics of effective clinical teachers in medicine has revealed the importance of establishing a positive learning climate, asking questions, diagnosing learners needs, giving feedback, utilizing established teaching frameworks, and developing a personalized philosophy of teaching.[11] Within CME, research has generally demonstrated that courses lead to improvements in lower level outcomes,[12] such as satisfaction and learning, yet higher level outcomes such as behavior change and impacts on patients are inconsistent.[13, 14, 15] Additionally, we have shown that participant reflection on CME is enhanced by presenters who have prior teaching experience and higher teaching effectiveness scores, by the use of audience participation and by incorporating relevant content.[16, 17] Despite the existence of research on CME in general, we are not aware of prior studies regarding characteristics of effective CME in the field of HM.

To better understand and improve the quality of HM CME, we sought to describe the characteristics of participants at a large, national HM CME course, and to identify associations between characteristics of presentations and CME teaching effectiveness (CMETE) scores using a previously validated instrument.

METHODS

Study Design and Participants

This cross‐sectional study included all participants (n=368) and presenters (n=29) at the Mayo Clinic Hospital Medicine Managing Complex Patients (MCP) course in October 2014. MCP is a CME course designed for hospitalists (defined as those who spend most of their professional practice caring for hospitalized patients) and provides up to 24.5 American Medical Association Physician's Recognition Award category 1 credits. The course took place over 4 days and consisted of 32 didactic presentations, which comprised the context for data collection for this study. The structure of the course day consisted of early and late morning sessions, each made up of 3 to 5 presentations, followed by a question and answer session with presenters and a 15‐minute break. The study was deemed exempt by the Mayo Clinic Institutional Review Board.

Independent Variables: Characteristics of Participants and Presentations

Demographic characteristics of participants were obtained through anonymous surveys attached to CME teaching effectiveness forms. Variables included participant sex, professional degree, self‐identified hospitalist, medical specialty, geographic practice location, age, years in practice/level of training, practice setting, American Board of Internal Medicine (ABIM) certification of Focused Practice in Hospital Medicine, number of CME credits earned, and number of CME programs attended in the past year. These variables were selected in an effort to describe potentially relevant demographics of a national cohort of HM CME participants.

Presentation variables included use of clinical cases, audience response system (ARS), number of slides, defined goals/objectives, summary slide and presentation length in minutes, and are supported by previous CME effectiveness research.[16, 17, 18, 19]

Outcome Variable: CME Teaching Effectiveness Scores

The CMETE scores for this study were obtained from an instrument described in our previous research.[16] The instrument contains 7 items on 5‐point scales (range: strongly disagree to strongly agree) that address speaker clarity and organization, relevant content, use of case examples, effective slides, interactive learning methods (eg, audience response), use of supporting evidence, appropriate amount of content, and summary of key points. Additionally, the instrument includes 2 open‐ended questions: (1) What did the speaker do well? (Please describe specific behaviors and examples) and (2) What could the speaker improve on? (Please describe specific behaviors and examples). Validity evidence for CMETE scores included factor analysis demonstrating a unidimensional model for measuring presenter feedback, along with excellent internal consistency and inter‐rater reliability.[16]

Data Analysis

A CMETE score per presentation from each attendee was calculated as the average over the 7 instrument items. A composite presentation‐level CMETE score was then computed as the average overall score within each presentation. CMETE scores were summarized using means and standard deviations (SDs). The overall CMETE scores were compared by presentation characteristics using Kruskal‐Wallis tests. To illustrate the size of observed differences, Cohen effect sizes are presented as the average difference between groups divided by the common SD. All analyses were performed using SAS version 9 (SAS Institute Inc., Cary, NC).

RESULTS

There were 32 presentations during the MCP conference in 2014. A total of 277 (75.2%) out of 368 participants completed the survey. This yielded 7947 CMETE evaluations for analysis, with an average of 28.7 per person (median: 31, interquartile range: 2732, range: 632).

Demographic characteristics of course participants are listed in Table 1. Participants (number, %), described themselves as hospitalists (181, 70.4%), ABIM certified with HM focus (48, 18.8%), physicians with MD or MBBS degrees (181, 70.4%), nurse practitioners or physician assistants (52; 20.2%), and in practice 20 years (73, 28%). The majority of participants (148, 58.3%) worked in private practice, whereas only 63 (24.8%) worked in academic settings.

Participant Characteristics
VariableNo. of Attendees (%), N=277
  • NOTE: Abbreviations: ABIM, American Board of Internal Medicine; CME, continuing medical education; DO, doctor of osteopathic medicine; HM, hospital medicine; IM, internal medicine; MBBS, bachelor of medicine, bachelor of surgery; MD, medical doctor; NP, nurse practitioner; PA, physician assistant.

Sex 
Unknown22
Male124 (48.6%)
Female131 (51.4%)
Age 
Unknown17
2029 years11 (4.2%)
3039 years83 (31.9%)
4049 years61 (23.5%)
5059 years56 (21.5%)
6069 years38 (14.6%)
70+ years11 (4.2%)
Professional degree 
Unknown20
MD/MBBS181 (70.4%)
DO23 (8.9%)
NP28 (10.9%)
PA24 (9.3%)
Other1 (0.4%)
Medical specialty 
Unknown26
Internal medicine149 (59.4%)
Family medicine47 (18.7%)
IM subspecialty14 (5.6%)
Other41 (16.3%)
Geographic location 
Unknown16
Western US48 (18.4%)
Northeastern US33 (12.6%)
Midwestern US98 (37.5%)
Southern US40 (15.3%)
Canada13 (5.0%)
Other29 (11.1%)
Years of practice/training 
Unknown16
Currently in training1 (0.4%)
Practice 04 years68 (26.1%)
Practice 59 years55 (21.1%)
Practice 1019 years64 (24.5%)
Practice 20+ years73 (28.0%)
Practice setting 
Unknown23
Academic63 (24.8%)
Privateurban99 (39.0%)
Privaterural49 (19.3%)
Other43 (16.9%)
ABIM certification HM 
Unknown22
Yes48 (18.8%)
No207 (81.2%)
Hospitalist 
Unknown20
Yes181 (70.4%)
No76 (29.6%)
CME credits claimed 
Unknown20
02454 (21.0%)
2549105 (40.9%)
507461 (23.7%)
759915 (5.8%)
100+22 (8.6%)
CME programs attended 
Unknown18
038 (14.7%)
12166 (64.1%)
3552 (20.1%)
6+3 (1.2%)

CMETE scores (mean [SD]) were significantly associated with the use of ARS (4.64 [0.16]) vs no ARS (4.49 [0.16]; P=0.01, Table 2, Figure 1), longer presentations (30 minutes: 4.67 [0.13] vs <30 minutes: 4.51 [0.18]; P=0.02), and larger number of slides (50: 4.66 [0.17] vs <50: 4.55 [0.17]; P=0.04). There were no significant associations between CMETE scores and use of clinical cases, defined goals, or summary slides.

Associations Between Presentation Characteristics and Validated Continuing Medical Education Teaching Effectiveness Scores
Presentation VariableNo. (%)Mean ScoreStandard DeviationP Value
Use of clinical cases    
Yes28 (87.5%)4.600.180.14
No4 (12.5%)4.490.14 
Audience response system    
Yes20 (62.5%)4.640.160.01
No12 (37.5%)4.490.16 
No. of slides    
5010 (31.3%)4.660.170.04
<5022 (68.8%)4.550.17 
Defined goals/objectives    
Yes29 (90.6%)4.580.180.87
No3 (9.4%)4.610.17 
Summary slide    
Yes22 (68.8%)4.560.180.44
No10 (31.3%)4.620.15 
Presentation length    
30 minutes14 (43.8%)4.670.130.02
<30 minutes18 (56.3%)4.510.18 
Figure 1
Overall teaching effectiveness score distribution by audience response, number of slides, and presentation length. The boxes represent the interquartile range (IQR) (25th to 75th percentiles) with the median (middle horizontal line) and mean (triangle). The dashed lines extend to the last observation within a distance equal to 1.5*IQR from the top and bottom of the box. Any observations beyond that distance are plotted separately. Abbreviations: CME, continuing medical education.

The magnitude of score differences observed in this study are substantial when considered in terms of Cohen's effect sizes. For number of slides, the effect size is 0.65, for audience response the effect size is 0.94, and for presentation length the effect size is approximately 1. According to Cohen, effect sizes of 0.5 to 0.8 are moderate, and effect sizes >0.8 are large. Consequently, the effect sizes of our observed differences are moderate to large.[20, 21]

DISCUSSION

To our knowledge, this is the first study to measure associations between validated teaching effectiveness scores and characteristics of presentations in HM CME. We found that the use of ARS and longer presentations were associated with significantly higher CMETE scores. Our findings have implications for HM CME course directors and presenters as they attempt to develop methods to improve the quality of CME.

CME participants in our study spanned a wide range of ages and experience levels, which is consistent with national surveys of hospitalists.[22, 23] Interestingly, however, nearly 1 in 3 participants trained in a specialty other than internal medicine. Additionally, participants' professional degrees were diverse, with 20% having trained as nurse practitioners or physician assistants. These findings are at odds with an early national survey of inpatient practitioners[22] but consistent with more recent literature suggesting that the diversity of training backgrounds among hospitalists is increasing as the field of HM evolves.[24] Hospital medicine CME providers will need to be cognizant of these demographic changes as they work to identify practice gaps and apply appropriate educational methods.

The use of an ARS allows for increased participation and engagement among lecture attendees, which in turn promotes active learning.[25, 26, 27] The association of higher teaching scores with the use of ARS is consistent with previous research in other CME settings, such as clinical round tables and medical grand rounds.[17, 28] Within HM specifically, our findings also build on a recent study by Sehgal et al., which reported that a novel bedside CME program designed to enhance interactive learning and discussion among hospitalists was viewed favorably by course participants.[29]

The reasons why longer presentations in our study were linked to higher CMETE scores are not entirely clear, as previous CME research has failed to demonstrate this relationship.[18] One possibility is that course participants prefer learning from presentations that provide granular, content‐rich information. Another possibility is that unmeasured characteristics of the presenters who gave longer presentations, such as experience and expertise, were responsible for the observed increase in CMETE scores. Yet another possibility is that effective presentations were longer because of the use of ARS, which was also associated with higher CMETE scores. This explanation is plausible because an ARS requires additional slides and provides opportunities for audience interaction, both of which may lengthen a presentation.

This study has several limitations. First, it involved a single CME conference sponsored by a large academic medical center, which may limit generalizability, especially to smaller conferences in community settings. However, the audience was large and diverse in terms of participants' experiences, practice settings, professional backgrounds, and geographic locations, and the demographic characteristics of hospitalists at our course appear very similar to a recently reported national cross‐section of hospitalist groups.[30] Second, this was a cross‐sectional study without a comparison group; nonetheless, a systematic review showed that most published education research studies involve single‐group designs without comparison groups.[31] Last, the outcomes of the study include attitudes and objectively measured presenter behaviors, such as the use of ARS, but not patient‐related outcomes. Nonetheless, evidence indicates that most medical education research does not report outcomes beyond knowledge,[31] behavior‐related outcomes have been noted to strike an ideal balance between feasibility and rigor,[32, 33] and the instrument used in this study to measure teaching effectiveness is supported by prior validity evidence.[16]

In summary, we found that longer hospital medicine CME presentations and those using an audience response system received higher teaching effectiveness ratings from course participants. These findings build upon previous CME research and suggest that CME course directors and presenters should incorporate opportunities that promote audience engagement and participation. Additionally, this study adds to the existing validity evidence for the CMETE assessment tool. Future research should explore potential associations between teaching effectiveness and patient‐related outcomes, and determine whether course content that reflects the SHM core competencies improves CME teaching effectiveness scores.

Disclosure

Nothing to report.

References
  1. Society of Hospital Medicine. 2013/2014 press kit. Available at: http://www.hospitalmedicine.org/Web/Media_Center/Web/Media_Center/Media_Center.aspx?hkey=e26ceba7-ba93-4e50-8eb1-1ccc75d6f0fd. Accessed May 18, 2015.
  2. Kleinpell RM, Hanson NA, Buchner BR, Winters R, Wilson MJ, Keck AC. Hospitalist services: an evolving opportunity. Nurse Pract. 2008;33:9–10.
  3. Wall S, Scudamore D, Chin J, et al. The evolving role of the pediatric nurse practitioner in hospital medicine. J Hosp Med. 2014;9:261–265.
  4. Wachter RM, Goldman L. The emerging role of “hospitalists” in the American health care system. N Engl J Med. 1996;335:514–517.
  5. Society of Hospital Medicine. Definition of a hospitalist and hospital medicine. Available at: http://www.hospitalmedicine.org/Web/About_SHM/Hospitalist_Definition/Web/About_SHM/Industry/Hospital_Medicine_Hospital_Definition.aspx. Accessed February 16, 2015.
  6. Accreditation Council for Continuing Medical Education. 2013 annual report data executive summary. Available at: http://www.accme.org/sites/default/files/630_2013_Annual_Report_20140715_0.pdf. Accessed February 16, 2015.
  7. Muroff LR. The anatomy of an outstanding CME meeting. J Am Coll Radiol. 2005;2:534–540.
  8. McKean SC, Budnitz TL, Dressler DD, Amin AN, Pistoria MJ. How to use The Core Competencies in Hospital Medicine: a framework for curriculum development. J Hosp Med. 2006;1:57–67.
  9. Wittich CM, Chutka DS, Mauck KF, Berger RA, Litin SC, Beckman TJ. Perspective: a practical approach to defining professional practice gaps for continuing medical education. Acad Med. 2012;87:582–585.
  10. Dressler DD, Pistoria MJ, Budnitz TL, McKean SC, Amin AN. Core competencies in hospital medicine: development and methodology. J Hosp Med. 2006;1(suppl 1):148–156.
  11. Beckman TJ, Lee MC. Proposal for a collaborative approach to clinical teaching. Mayo Clin Proc. 2009;84:339–344.
  12. Beckman TJ, Cook DA. Developing scholarly projects in education: a primer for medical teachers. Med Teach. 2007;29:210–218.
  13. Mansouri M, Lockyer J. A meta‐analysis of continuing medical education effectiveness. J Contin Educ Health Prof. 2007;27:6–15.
  14. Moore DE, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof. 2009;29:1–15.
  15. Cervero RM, Gaines JK. Effectiveness of continuing medical education: updated synthesis of systematic reviews. Available at: http://www.accme.org/sites/default/files/652_20141104_Effectiveness_of_Continuing_Medical_Education_Cervero_and_Gaines.pdf. Accessed March 25, 2015.
  16. Wittich CM, Mauck KF, Mandrekar JN, et al. Improving participant feedback to continuing medical education presenters in internal medicine: a mixed‐methods study. J Gen Intern Med. 2012;27:425–431.
  17. Wittich CM, Szostek JH, Reed DA, et al. Measuring faculty reflection on medical grand rounds at Mayo Clinic: associations with teaching experience, clinical exposure, and presenter effectiveness. Mayo Clin Proc. 2013;88:277–284.
  18. Copeland HL, Longworth DL, Hewson MG, Stoller JK. Successful lecturing: a prospective study to validate attributes of the effective medical lecture. J Gen Intern Med. 2000;15:366–371.
  19. Shewchuk RM, Schmidt HJ, Benarous A, Bennett NL, Abdolrasulnia M, Casebeer LL. A standardized approach to assessing physician expectations and perceptions of continuing medical education. J Contin Educ Health Prof. 2007;27:173–182.
  20. Cohen J. Statistical Power Analysis for the Behavioral Sciences. New York, NY: Academic Press; 1977.
  21. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Erlbaum; 1988.
  22. Lindenauer PK, Pantilat SZ, Katz PP, Wachter RM. Hospitalists and the practice of inpatient medicine: results of a survey of the National Association of Inpatient Physicians. Ann Intern Med. 1999;130:343–349.
  23. Hinami K, Whelan CT, Wolosin RJ, Miller JA, Wetterneck TB. Worklife and satisfaction of hospitalists: toward flourishing careers. J Gen Intern Med. 2012;27:28–36.
  24. Kartha A, Restuccia JD, Burgess JF, et al. Nurse practitioner and physician assistant scope of practice in 118 acute care hospitals. J Hosp Med. 2014;9:615–620.
  25. Cain J, Robinson E. A primer on audience response systems: current applications and future considerations. Am J Pharm Educ. 2008;72:77.
  26. Davis N, Davis D, Bloch R. Continuing medical education: AMEE education guide no 35. Med Teach. 2008;30:652–666.
  27. Caldwell JE. Clickers in the large classroom: current research and best‐practice tips. CBE Life Sci Educ. 2007;6:9–20.
  28. Miller RG, Ashar BH, Getz KJ. Evaluation of an audience response system for the continuing education of health professionals. J Contin Educ Health Prof. 2003;23:109–115.
  29. Sehgal NL, Wachter RM, Vidyarthi AR. Bringing continuing medical education to the bedside: the University of California, San Francisco Hospitalist Mini‐College. J Hosp Med. 2014;9:129–134.
  30. Society of Hospital Medicine. 2014 State of Hospital Medicine Report. Philadelphia, PA: Society of Hospital Medicine; 2014.
  31. Reed DA, Cook DA, Beckman TJ, Levine RB, Kern DE, Wright SM. Association between funding and quality of published medical education research. JAMA. 2007;298:1002–1009.
  32. Shea JA. Mind the gap: some reasons why medical education research is different from health services research. Med Educ. 2001;35:319–320.
  33. Cook DA, Beckman TJ. Reflections on experimental research in medical education. Adv Health Sci Educ Theory Pract. 2010;15:455–464.
Issue
Journal of Hospital Medicine - 10(9)
Page Number
569-573
Display Headline
Associations between teaching effectiveness scores and characteristics of presentations in hospital medicine continuing education
Article Source

© 2015 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: John T. Ratelle, MD, Mayo Clinic Rochester, 200 First Street SW, Rochester, MN 55905; Telephone: 507‐284‐3589; Fax: 507‐255‐9189; E‐mail: ratelle.john@mayo.edu