Patient satisfaction has received increased attention in recent years, which we believe is well deserved and long overdue. Anyone who has been hospitalized, or has had a loved one hospitalized, can appreciate that there is room to improve the patient experience. Dedicating time and effort to improving the patient experience is consistent with our professional commitment to comfort, empathize with, and partner with our patients. Though patient satisfaction is itself an outcome worthy of our attention, it is also positively associated with measures related to patient safety and clinical effectiveness.[1, 2] Moreover, patient satisfaction is the only publicly reported measure that represents the patient's voice,[3] and it accounts for a substantial portion of the Centers for Medicare and Medicaid Services payment adjustments under the Hospital Value-Based Purchasing Program.[4]
However, all healthcare professionals should understand some fundamental issues related to the measurement of patient satisfaction. The survey from which data are publicly reported and used for hospital payment adjustment is the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey, developed by the Agency for Healthcare Research and Quality.[5, 6] HCAHPS is sent to a random sample of 40% of hospitalized patients between 48 hours and 6 weeks after discharge. The HCAHPS survey uses ordinal response scales (eg, never, sometimes, usually, always) that generate results highly skewed toward favorable responses. Therefore, results are reported as the percent top box (ie, the percentage of responses in the most favorable category) rather than as a median score. The skewed distribution of results indicates that most patients are generally satisfied with care (ie, most respondents do not have an axe to grind), but it also makes meaningful improvement difficult to achieve. Prior to public reporting and determination of the effect on hospital payment, results are adjusted for mode of survey administration and patient mix. The same is not true when patient satisfaction data are used for internal purposes: hospital leaders typically do not perform statistical adjustment and therefore need to be careful not to make apples-to-oranges comparisons. For example, obstetric patient satisfaction scores should not be compared to general medical patient satisfaction scores, as these populations tend to rate satisfaction differently.
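To make the top-box calculation concrete, here is a minimal sketch in Python of how a percent top box score can be computed from ordinal survey responses. The responses and the item are hypothetical, not HCAHPS data; the sketch illustrates only the arithmetic.

```python
import pandas as pd

# Hypothetical ordinal responses to a single survey item
# (never < sometimes < usually < always); illustrative data only.
responses = pd.Series(
    ["always", "always", "usually", "always", "sometimes",
     "always", "usually", "always", "always", "never"]
)

# Percent top box: the share of responses in the most favorable category.
percent_top_box = (responses == "always").mean() * 100
print(f"Percent top box: {percent_top_box:.1f}%")  # 60.0% for this sample
```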
The HCAHPS survey questions are organized into domains of care, including satisfaction with nurses and satisfaction with doctors. Importantly, other healthcare team members may influence patients' perceptions in these domains. For example, a patient responding to nurse communication questions may also reflect on experiences with patient care technicians, social workers, and therapists. A patient responding to physician communication questions might also reflect on experiences with advanced practice providers. A common mistake is attributing satisfaction with doctors to the individual who served as the discharge physician. Many readers have likely seen patient satisfaction reports broken out by discharge physician, with the expectation that giving this information to individual physicians will serve as useful formative feedback. The reality is that patients see many doctors during a hospitalization. To illustrate this point, we analyzed data from 420 patients admitted to our nonteaching hospitalist service who had completed an HCAHPS survey in 2014. We found that the discharge hospitalist accounted for only 34% of all physician encounters. Furthermore, research has shown that patients' experiences with specialist physicians also have a strong influence on their overall satisfaction with physicians.[7]
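The calculation behind such an estimate is straightforward. The sketch below computes the discharge physician's share of all physician encounters from a hypothetical encounter log; the column names and values are illustrative assumptions, not our actual data.

```python
import pandas as pd

# Hypothetical encounter log: one row per physician encounter during a stay.
encounters = pd.DataFrame({
    "patient_id":   [101, 101, 101, 102, 102, 102, 102, 103, 103],
    "physician_id": ["A", "B", "A", "C", "D", "C", "E", "F", "F"],
})

# Discharge physician for each patient (also hypothetical).
discharge_md = {101: "A", 102: "C", 103: "F"}

# Flag encounters that involved the discharge physician, then take the share.
encounters["with_discharge_md"] = (
    encounters["physician_id"] == encounters["patient_id"].map(discharge_md)
)
share = encounters["with_discharge_md"].mean() * 100
print(f"Discharge physician share of all encounters: {share:.0f}%")
```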
Having reliable patient satisfaction data on specific individuals would be a truly powerful formative assessment tool. In this issue of the Journal of Hospital Medicine, Banka and colleagues report on an impressive approach incorporating such a tool to give constructive feedback to physicians.[8] Since 2006, the study site had administered surveys to hospitalized patients assessing their satisfaction with specific resident physicians.[9] However, residency programs reviewed the survey results with resident physicians only about twice a year. The multifaceted intervention developed by Banka and colleagues included directly emailing the survey results to internal medicine resident physicians in real time while they were on service, a 1-hour conference on best communication practices, and a reward program in which 3 residents were identified each month to receive department-wide recognition via email and a generous movie package. Using difference-in-differences regression analysis, the investigators compared changes in patient satisfaction results for internal medicine residents to results for residents from other specialties who were not part of the intervention. The percentage of patients who gave top-box responses to all 3 physician-related questions and to the overall hospital rating was significantly higher for the internal medicine residents.
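For readers unfamiliar with the method, the sketch below shows the general shape of a difference-in-differences model in Python using statsmodels, with the interaction term carrying the estimated intervention effect. The simulated data, variable names, and linear-probability specification are assumptions for illustration, not the authors' actual model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated patient-level survey data (illustrative only).
rng = np.random.default_rng(0)
n = 800
df = pd.DataFrame({
    # 1 = internal medicine resident (intervention group), 0 = other specialty
    "treated": rng.integers(0, 2, n),
    # 1 = survey completed after the intervention began, 0 = before
    "post": rng.integers(0, 2, n),
})
# Top-box outcome with a small simulated effect in the treated, post-period cell.
df["top_box"] = rng.binomial(1, 0.6 + 0.05 * df["treated"] * df["post"])

# Difference-in-differences: group and period main effects absorb baseline
# differences and secular trends; the interaction estimates the intervention effect.
did = smf.ols("top_box ~ treated * post", data=df).fit()
print(did.params["treated:post"])  # estimated change attributable to the intervention
```

In practice, one would typically also cluster standard errors and adjust for patient mix; the sketch shows only the core contrast.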
The findings from this study are important because, to our knowledge, no prior study of an intervention has shown a significant improvement in patient satisfaction scores. In this study, feedback was believed to be the most powerful factor. The importance of meaningful, timely feedback in medical education is well recognized.[10] Without feedback, there is poor insight into how the intended results of specific actions compare with actual results. When feedback from external sources (in this case, the voice of the patient) is lacking, an uncontested sense of mastery develops, allowing mistakes to go uncorrected. This false sense of mastery contributes to an emotional and defensive response when performance is finally revealed to be less than optimal. The simple act of giving more timely feedback in this study encouraged self-motivated reflection and practice change aimed at improving patient satisfaction, with remarkable results.
The study should inspire physician leaders across hospital settings, as well as researchers, to develop and evaluate similar programs to improve patient satisfaction. We agree with the investigators that the approach should be multifaceted. Feedback to specific physicians is a powerful motivator, but it needs to be combined with strategies to enhance communication skills. Brief conferences are less likely to have a lasting impact on behavior than strategies such as coaching and simulation-based training.[11] Interventions should also include recognition and reward to acknowledge exceptional performance and build friendly competition.
The biggest challenge to adopting an intervention such as the one used in the Banka study is the feasibility of implementing physician-specific patient satisfaction reporting. Several survey instruments are available to assess satisfaction with specific physicians.[9, 12, 13] However, who will administer these instruments? Most hospitals do not have undergraduate students available to administer surveys, and although hospitals could use their volunteers, this is not likely to be a sustainable solution. Hospitals could consider administering the survey via email, but many hospitals are just starting to collect patient email addresses and many patients do not use email. Once data are collected, who will conduct the analyses and create comparative reports? Press Ganey recently developed a survey that uses photographs to assess satisfaction with specific hospitalists and offers the ability to create comparative reports.[14] Their service addresses the analytic challenge, but the quandary of survey administration remains.
In conclusion, we encourage hospital medicine leaders to develop and evaluate multifaceted interventions to improve patient satisfaction, such as the one reported by Banka et al. Timely, specific feedback to physicians is an essential feature. The collection of physician-specific data is a major challenge, but not an insurmountable one. Novel use of personnel and/or technology is likely to play a role in these efforts.
Disclosure: Nothing to report.
- A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open. 2013;3(1).
- Patients' perception of hospital care in the United States. N Engl J Med. 2008;359(18):1921–1931.
- Medicare.gov. Hospital Compare. Available at: http://www.medicare.gov/hospitalcompare/search.html. Accessed April 27, 2015.
- Centers for Medicare 67(1):27–37.
- Who's behind an HCAHPS score? Jt Comm J Qual Patient Saf. 2011;37(10):461–468.
- Improving patient satisfaction through physician education, feedback, and incentives. J Hosp Med. 2015;10(8):497–502.
- Promoting patient-centred care through trainee feedback: assessing residents' C-I-CARE (ARC) program. BMJ Qual Saf. 2012;21(3):225–233.
- Feedback in clinical medical education. JAMA. 1983;250(6):777–781.
- Impact of hospitalist communication-skills training on patient-satisfaction scores. J Hosp Med. 2013;8(6):315–320.
- Assessing patient perceptions of hospitalist communication skills using the Communication Assessment Tool (CAT). J Hosp Med. 2010;5(9):522–527.
- Development and validation of the tool to assess inpatient satisfaction with care from hospitalists. J Hosp Med. 2014;9(9):553–558.
- Press Ganey. A true performance solution for hospitalists. Available at: http://www.pressganey.com/ourSolutions/patient-voice/census-based-surveying/hospitalist.aspx. Accessed April 27, 2015.