A Performance Metrics Primer

Hospitalists are no strangers to performance measurement. Every day, their performance is measured, formally and informally, by their sponsoring organizations, by third-party payers, and by patients.

But many hospitalists are not engaged in producing or reviewing that performance data.

“Historically, hospitalist groups have relied on the hospital to collect the data and present it to them—and still do, to a great extent, even today,” says Marc B. Westle, DO, FACP, president and managing partner for a large private hospital medicine group (HMG), Asheville Hospitalist Group in North Carolina.

This often puts hospitalists at a disadvantage, says Dr. Westle. If hospitalist groups don’t get involved with data reporting and analysis, they can’t have meaningful discussions with their hospitals.

With a background in hospital administration, Leslie Flores, MHA, co-principal of Nelson/Flores Associates, LLC, is well acquainted with the challenges of collecting and reporting hospital data. Through her consulting work with partner John Nelson, MD, she has found that sponsoring organizations often don’t review performance data with hospitalists. Hospitalists may examine their performance one way, while the hospital uses a different set of metrics or analytical techniques. This disconnect, she notes, “leads to differences in interpretations and understandings that can occur between the hospital folks and the doctors when they try to present information.”

A new white paper produced by SHM’s Benchmarks Committee, “Measuring Hospitalist Performance: Metrics, Reports, and Dashboards,” aims to change these scenarios by encouraging hospitalists to take charge of their performance reporting. Geared to multiple levels of expertise with performance metrics, the white paper offers “some real, practical advice as to how you capture this information and then how you look at it,” says Joe Miller, SHM senior vice president and staff liaison to the Benchmarks Committee.

It may seem overwhelming at first to do an all-encompassing dashboard, but even if you pick just a couple things to start with, this puts down on paper what your worth is. When you can point to how your services are improving or maintaining over time, that’s the picture that says a thousand words.

— Daniel Rauch, MD, FAAP

Select a Metric

The Benchmarks Committee used a Delphi process to rank the importance of various metrics and produced a list of 10 on which to focus. The clearly written introduction walks readers through a step-by-step process intended to help HMGs decide which performance metrics they will measure.

Flores, editor of the white paper project, cautions that the “magic 10” metrics selected by the committee don’t necessarily represent the most important metrics for each practice. “We wanted to stimulate hospitalists to think about how they view their own performance and to create a common language and understanding of what some key issues and expectations should be for hospitalists’ performance monitoring,” she says. “They can use this document as a starting point and then come up with performance metrics that really matter to their practice.”

Which metrics a hospitalist service chooses to measure and report will depend on a variety of factors particular to that group, including:

  • The HMG’s original mission;
  • The expectations of the hospital or other sponsoring organization (such as a multispecialty group) for the return on their investment;
  • Key outcomes and/or performance measures sought by payers, regulators, and other stakeholders; and
  • The practice’s high-priority issues.

Regarding the last item, Flores recalls one HMG that decided to include on its dashboard a survey of how it used consulting physicians from the community. This component was chosen to address the concerns of other specialists in the community, who feared the hospitalists were using only their own medical group’s specialists for consultations.

To further guide choices of metrics, the white paper uses a uniform template to organize each section. Whether the metric is descriptive (volume data, case mix), operational (hospital cost, productivity, provider satisfaction, length of stay, patient satisfaction), or clinical (mortality data, readmission rate, JCAHO core measures), each section includes a discussion titled “Why this metric is important.”

Daniel Rauch, MD, FAAP, explains why a pediatric hospitalist group might choose to focus on referring provider satisfaction rather than volume data, which is perhaps a more critical metric for adult hospitalist groups.

“Our volume data [a descriptive metric] will depend on who’s referring to us and the availability of subspecialists, as opposed to market share and the notability of the institution in the local environment,” he notes.

Dr. Rauch, director of the Pediatric Hospitalist Program at New York University School of Medicine in New York City and editor of the Provider Satisfaction section of the white paper, co-presented the pediatric hospitalist perspective on the white paper with Flores at the 2007 SHM Annual Meeting.

Much more critical to the success of a pediatric hospitalist service is nurturing relationships with local pediatricians, who traditionally want to retain their ability to manage patients under all circumstances. As a result, the pediatric hospitalist group might choose to survey its referring providers to learn how it can provide better service and take advantage of positive survey responses to market its service. (These interventions are outlined in “Performance Metric Seven: Provider Satisfaction.”)

Finding the Data

Once a group has selected its performance metrics, it faces many logistical and political challenges to obtain the pertinent data. Again, the white paper’s template furnishes clear direction on data sources for each metric.

To begin, hospitalists must understand their practicing environment. Many smaller rural or freestanding hospitals do not have the IT decision-support resources to generate customized reports for hospitalists. “For instance, the hospital may be able to furnish information about length of stay for the hospital in general, but [may] not [be able] to break out LOS numbers for the hospitalist group compared to other physicians,” explains Flores. In addition, some billing services can’t or won’t provide information on volume, charges, and collections to the hospitalist group.

“The other challenge is more of a cultural or philosophical one,” says Flores. “Very often, hospitals or other sponsoring entities are reluctant to share financial information, in particular, with the hospitalists, because they are afraid that the hospitalists will use the information inappropriately—or that they’ll somehow become more powerful by virtue of having that information. And, in fact, that’s what we really want: to be more powerful—but in a constructive, positive way.”

In this case, HMGs may need to invest time assuring those organizations that the information won’t be used against them and that the only goal is to improve practice performance.

“Finding the data is not always easy,” concedes Burke T. Kealey, MD, assistant medical director of hospital medicine for HealthPartners Medical Group in St. Paul, Minn., and chair of SHM’s Benchmarks Committee. “Some organizations can give you a lot of these data sets pretty easily, and some are not going to produce many of them at all. And, when you cross organizational boundaries, there are political considerations. For example, if you’re a national hospitalist company trying to get data from individual hospitals, it might be difficult.” (Dr. Kealey co-presented the workshop on the white paper for adult HMGs with Flores at the 2007 SHM Annual Meeting in Dallas.)

Sources of data will vary from metric to metric. To obtain data for measuring volume (often used as an indicator for staffing requirements and scheduling), hospitalists need access to hospital admission/discharge/transfer systems, health-plan data systems, or the hospital medicine service’s billing system. For an operational metric like provider satisfaction, the hospitalist group may have to field its own referring-provider survey (by mail, by phone, or in person) to learn how it is viewed by referring physicians.
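
The volume case can be illustrated with a few lines of analysis code. A minimal sketch, assuming a CSV extract from the ADT system; the file name, column names, and service label are hypothetical, not a standard schema:

```python
import pandas as pd

# Hypothetical extract from the hospital's admission/discharge/transfer
# (ADT) system; file and column names are illustrative only.
adt = pd.read_csv("adt_extract.csv", parse_dates=["admit_date"])

# Keep encounters attributed to the hospital medicine service, then
# count admissions per calendar month -- a basic volume metric.
hm = adt[adt["service"] == "hospital_medicine"]
monthly_volume = hm["admit_date"].dt.to_period("M").value_counts().sort_index()
print(monthly_volume)
```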

How to Interpret the Data

Obtaining the data is only half the battle. Another core tool in the white paper is the template section “Unique Measurement and Analysis Considerations,” which guides hospitalists as they attempt to verify the validity of their data and ensure valid comparisons.

Dr. Westle’s group has studiously tracked its performance metrics for years; other groups may have little experience in this domain. Another critical step in creating dashboard reports, he states, is understanding how the data are collected and ensuring the data are accurate and attributed appropriately.

“The way clinical cases are coded ought to be the subject of some concern and scrutiny,” says John Novotny, MD, director of the Section of Hospital Medicine of the Allen Division at Columbia University Medical Center in New York City and another Benchmarks Committee member. “There may be a natural inclination to accept the performance information provided to us by the hospital, but the processes that generated these data need to be well understood to gauge the accuracy and acceptability of any conclusions drawn.”

With a background in statistics and information technology, Dr. Novotny cautions that “some assessment of the validity of comparisons within or between groups or to benchmark figures should be included in every analysis or report—to justify any conclusions drawn and to avoid the statistical pitfalls common to these data.”

He advises HMGs to run the numbers by someone with expertise in data interpretation, especially before reports are published or submitted for public review. These issues come up frequently in the analysis of frequency data, such as the number of deaths occurring in a group for a particular diagnosis over a period of time, where the numbers might be relatively small.

For example, if five deaths are observed in a subset of 20 patients, the statistic of a 25% death rate comes with such low precision that the true underlying death rate might fall anywhere between 8% and 50%.

“This is a limitation inherent in drawing conclusions from relatively small data sets, akin to driving down a narrow highway with a very loose steering wheel—avoiding the ravines is a challenge,” he says.
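
Dr. Novotny’s five-in-20 example can be reproduced with an exact binomial confidence interval. A minimal sketch using the Clopper-Pearson method via SciPy; the specific interval method is our assumption, since the article does not name one:

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# Five deaths observed among 20 patients: a 25% observed rate.
lower, upper = clopper_pearson(5, 20)
print(f"observed rate: {5 / 20:.0%}; 95% CI: {lower:.0%} to {upper:.0%}")
# Prints roughly 9% to 49%, in line with the "between 8% and 50%" range above.
```

With only 20 patients, the interval spans some 40 percentage points, which is exactly the loose steering wheel Dr. Novotny describes.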

Dr. Novotny contributed the section on mortality metrics for the white paper. Although a group’s raw mortality data may be easily obtained, “HMGs should be wary of the smaller numbers resulting from stratifying the data by service, DRG [diagnosis-related group], or time periods,” he explains.

Instead, as suggested in the “Interventions” section, the HMG might want to take the additional approach of documenting the use of processes thought to have a positive impact on the risk of mortality in hospitalized patients. Potentially useful processes under development and discussion in the literature include interdisciplinary rounds, effective inter-provider communication, and ventilator care protocols, among others.

“We need to show that not only do we track our mortality figures, we analyze and respond to them by improving our patient care,” Dr. Novotny says. “We need to show that we’re making patient care safer.”

At the Ochsner Health Center in New Orleans, the HMG decided to track readmission rates for congestive heart failure, the primary DRG for inpatient care, and to compare its rates with those of other services. Because heart failure is traditionally the bailiwick of cardiology, “you might think that the cardiology service would have the best outcomes,” says Steven Deitelzweig, MD, vice president of medical affairs and system chairman.

But, using order sets aligned with JCAHO standards and with evidence-based best practices in cardiology, Dr. Deitelzweig’s hospitalist group “was able to demonstrate statistically and objectively that our outcomes were better, adjusting for case mix.”
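
An unadjusted version of such a comparison can be run as a simple two-proportion test. The counts below are invented for illustration (they are not Ochsner’s data), and a real analysis would also adjust for case mix, as Dr. Deitelzweig’s group did:

```python
from statsmodels.stats.proportion import proportions_ztest

# Invented 30-day CHF readmission counts, for illustration only.
readmissions = [18, 31]     # hospitalist service, cardiology service
discharges = [240, 310]

stat, p_value = proportions_ztest(readmissions, discharges)
print(f"hospitalist rate: {readmissions[0] / discharges[0]:.1%}")
print(f"cardiology rate:  {readmissions[1] / discharges[1]:.1%}")
print(f"two-sided p-value: {p_value:.3f}")
```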

Make Your Own Case

Once the infrastructure for tracking and reporting productivity is in place, hospitalists can use performance metrics to build their own case, remarks Dr. Kealey. The white paper furnishes several examples of customized dashboards. Some use a visual display to illustrate improvement or maintenance in key performance areas.
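
For a group building its first dashboard, a single run chart per metric goes a long way. A minimal sketch with matplotlib, using invented length-of-stay numbers:

```python
import matplotlib.pyplot as plt

# Invented monthly values for one dashboard metric.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
avg_los = [4.8, 4.6, 4.7, 4.4, 4.3, 4.2]   # average length of stay, in days
target = 4.5

plt.plot(months, avg_los, marker="o", label="HMG average LOS")
plt.axhline(target, linestyle="--", color="gray", label="target")
plt.ylabel("days")
plt.title("Average Length of Stay by Month")
plt.legend()
plt.show()
```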

Dr. Westle notes that metrics reports can be used in a variety of ways, including:

  • Negotiating with the hospital;
  • Managing a practice internally (e.g., tracking the productivity of established and new full-time equivalents [FTEs] and compensating physicians for their productivity); and
  • Negotiating with third-party payers, who increasingly rely on pay-for-performance measures. For instance, Dr. Westle says, if a group can track its cost per case for the top 15 DRGs and show those costs are less than the national average, this “puts the hospitalist group at a significant advantage when talking to insurance companies about pay for performance.” (A sketch of that comparison follows this list.)
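
A sketch of the DRG cost comparison Dr. Westle describes. The input files, column names, and benchmark table are hypothetical; no standard data source is implied:

```python
import pandas as pd

# Hypothetical inputs: one row per discharge, plus a benchmark table
# holding a national average cost per DRG. Names are illustrative only.
cases = pd.read_csv("hm_cases.csv")
benchmarks = pd.read_csv("drg_benchmarks.csv").set_index("drg")

# Average cost per case for the group's 15 highest-volume DRGs.
top15 = cases["drg"].value_counts().head(15).index
group_cost = (cases[cases["drg"].isin(top15)]
              .groupby("drg")["total_cost"].mean()
              .rename("group_avg_cost"))

report = group_cost.to_frame().join(benchmarks["national_avg_cost"])
report["delta"] = report["group_avg_cost"] - report["national_avg_cost"]
print(report.sort_values("delta"))   # negative delta = below the national average
```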

Dr. Deitelzweig reports that his HMG at the Ochsner Health Center posts monthly updates of its dashboard results in the halls of its department and others. “Whether it’s readmission rates, patient satisfaction, or hand washing, it’s up there for all to see,” he says. He believes that this type of transparency is not only a good reminder for staff but benefits patients, as well. “It’s helpful because it highlights for your department members the goals of the department and that those are aligned with patient satisfaction and best outcomes.”

Conclusion

“If hospitalists can work with their hospitals to understand how various data elements are defined, collected and reported,” says Flores, “this will enable them to develop a greater understanding of what the information means, correct any misinterpretations on the hospital’s part, and gain a greater confidence in the information’s credibility and reliability. Hospitalists should work closely with their sponsoring organizations to define metrics and reports that are mutually credible and meaningful, so that all parties are looking at the same things and understanding them the same way.”

Participating in the white paper project gave Dr. Rauch a better appreciation of the value of measuring performance. His advice to first-timers: “It may seem overwhelming at first to do an all-encompassing dashboard, but even if you pick just a couple things to start with, this puts down on paper what your worth is. When you can point to how your services are improving or maintaining over time, that’s the picture that says a thousand words.” TH

Gretchen Henkel is a frequent contributor to The Hospitalist.
