Data Daze

As I write this column, our nation’s economy is looking pretty shaky. In early January, the labor market report showed a significant uptick in the unemployment rate. This led worried investors to sell stocks, and the Federal Reserve lowered the interest rate dramatically in response to the sudden fall in the stock market. Or at least that’s the version of events we’re being fed by most of the press.

Do investors overreact to job numbers? There is a lot of debate about the accuracy of job and unemployment statistics. Clearly they are valuable, but there are all kinds of problems with how the labor surveys are conducted and how the resulting data are analyzed.

There may not be a better way to collect the data, so despite their flaws, surveys may provide the best information on the labor market that we can get.

The real problem arises when investors—sort of like you and me but a lot richer—look at these data and fail to keep in mind all their strengths and weaknesses. There is a risk that people will focus on a single number and overestimate its precision. This has been called the “salience bias.” Writing in The New Yorker, James Surowiecki says this salience bias can lead to “a hard-to-break feedback loop: The fact that traders act as if the jobs report were definitive makes it so. A little information can be a dangerous thing.”1

I discuss hospitalist survey data with people all the time. I’m struck by how often they seem misled by salience bias, among other things. With SHM’s release this month of its latest biannual survey of hospitalist productivity and compensation, now seems like a good time to discuss the strengths and weaknesses in the data—and cautions for interpreting them.

Understand the strengths and limitations of the survey. SHM’s “Bi-Annual Survey on the State of the Hospital Medicine Movement” is a self-report survey in which each practice leader (or his/her designee) completes the questionnaire. The responses aren’t verified or audited, so some respondents might submit shoddy data. Perhaps a busy group leader might complete the survey from memory and estimate things like each doctor’s production of work relative value units (wRVUs). When I’ve looked at the raw data, I’ve wondered if some respondents are trying to “spin” their numbers higher or lower for a variety of reasons (e.g., to look unusually good or show how hard their doctors can work). And there may be a response bias: Those who think their practice is atypical might not respond to the survey.

This year, SHM worked to “scrub” the data. Outlier thresholds were established for each question, and SHM staff followed up with respondents whose answers fell outside them to ensure they understood the question and provided accurate data. In fact, I completed the survey for the group I’m part of and got a call from a survey staffer questioning the productivity I reported for some members of the group (our nocturnists have lower wRVU productivity than others in the group—that is one reason they’re willing to work at night).
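
To make the idea concrete, here is a minimal sketch of how that kind of outlier screening might work. The interquartile-range rule and every number below are my own assumptions for illustration, not SHM’s actual method or data:

```python
# A minimal sketch of outlier screening in the spirit of the "scrubbing"
# described above. The 1.5 * IQR rule and all figures are assumptions,
# not SHM's actual method or data.

def flag_outliers(values):
    """Return values falling more than 1.5 * IQR outside the quartiles."""
    s = sorted(values)
    n = len(s)
    q1, q3 = s[n // 4], s[(3 * n) // 4]  # rough quartiles for a small sample
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if v < lo or v > hi]

# Hypothetical annual wRVU totals reported for one group's physicians.
wrvus = [3900, 4100, 4300, 4200, 4000, 1800, 7200]
print(flag_outliers(wrvus))  # [1800, 7200]: a nocturnist, or a data-entry error?
```

A flagged value isn’t necessarily wrong (as my nocturnist example shows); it just earns a follow-up call.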

Remember that data are historical and should be “aged” to the time period in which you’re using them. The data in the 2008 SHM survey were collected from October through December.
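
As a hedged illustration of what “aging” might look like, this sketch compounds a historical figure forward by an assumed market trend. The 4% rate and the dollar figure are hypothetical, not numbers from the survey:

```python
# A sketch of "aging" a survey figure to the period you'll use it in.
# The trend rate and dollar amount below are hypothetical assumptions.

def age_metric(historical_value: float, annual_trend: float, years: float) -> float:
    """Compound a historical survey value forward by an assumed annual trend."""
    return historical_value * (1 + annual_trend) ** years

# A survey median collected roughly a year before you apply it:
print(f"${age_metric(190_000, annual_trend=0.04, years=1.0):,.0f}")  # $197,600
```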

Review the original questions asked in the survey. To make sense of the survey responses, you will need to clearly understand the questions asked. Review the survey instrument and form your own conclusions about ways the questions were posed that might influence the responses. And don’t assume you understand what a particular term means—verify it by looking at the survey instrument. For example, I encounter a wide variety of opinions regarding what constitutes base salary, incentive pay, productivity compensation, bonus, and total compensation. The survey instrument spells these things out clearly.

It can be valuable to trend some data over successive surveys. For example, you may want to know the trend in the average hospitalist’s wRVU productivity over the past two surveys. You should look at the questions asked in both surveys to make sure there hasn’t been a change that could influence the responses. In the case of wRVUs, you will need to understand how the survey handled the January 2007 change in wRVU values for many services hospitalists provide.

Pay attention to how data are lumped or split. Some data are appropriate for analysis of hospitalist groups, other data for individual hospitalists. If half of hospitalist groups use a shift-based schedule, that doesn’t mean half of individual hospitalists work such a schedule. Shift-based schedules are more common in larger groups, so even if half the groups in the country schedule by shifts, as many as 80% of individual hospitalists might work that way.
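
For readers who like to see the arithmetic, this sketch shows how one hypothetical population yields both figures at once—50% of groups but 80% of hospitalists. The group sizes are invented; the weighting is the point:

```python
# Why group-level and individual-level proportions diverge.
# Hypothetical population: equal counts of small and large groups.

groups = [
    # (count of groups, hospitalists per group, uses shift-based schedule?)
    (50, 4, False),   # many small groups without shift schedules
    (50, 16, True),   # equally many large, shift-based groups
]

shift_groups = sum(n for n, _, shift in groups if shift)
total_groups = sum(n for n, _, _ in groups)
shift_docs = sum(n * size for n, size, shift in groups if shift)
total_docs = sum(n * size for n, size, _ in groups)

print(f"Groups on shift schedules:       {shift_groups / total_groups:.0%}")  # 50%
print(f"Hospitalists on shift schedules: {shift_docs / total_docs:.0%}")      # 80%
```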

Salary incentives illustrate another way responses are lumped or split. As of the last survey (reported in 2006), most hospitalists had a variable component to their compensation—most often based on productivity or quality. There are relatively few ways hospitalists are paid on productivity (basing it on wRVUs is most common). But there are myriad quality incentives, based on things like Centers for Medicare and Medicaid Services core measures, and patient and referring physician satisfaction. Depending on how you aggregated these different categories in the 2006 survey, you might reach different conclusions about whether more hospitalists have productivity-based incentives or quality-based incentives (productivity was more common in the 2006 survey).
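
Here is a small sketch of how the lumping decision can flip the headline. The counts are invented for illustration and are not the 2006 survey’s actual tallies:

```python
# Hypothetical tallies showing how lumping vs. splitting incentive
# categories changes the conclusion. All counts are invented.
from collections import Counter

responses = (
    ["wRVU productivity"] * 40
    + ["core measures"] * 16
    + ["patient satisfaction"] * 14
    + ["referring-physician satisfaction"] * 12
)

# Split: productivity is the single most common basis.
print(Counter(responses).most_common(1))  # [('wRVU productivity', 40)]

# Lumped: the quality-related incentives combined now outnumber it.
lumped = Counter(
    "productivity" if r == "wRVU productivity" else "quality" for r in responses
)
print(lumped)  # Counter({'quality': 42, 'productivity': 40})
```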

Drill down to respondent populations that most closely match your group. There is a real temptation to overemphasize the “headline” numbers in the survey, like the average total salary for a hospitalist. Yet in many cases, it may be more useful to drill down to a population that matches your group. You might be most interested in compensation for hospital-employed hospitalists in non-teaching hospitals in the South (thereby excluding academicians and pediatric hospitalists from your comparison group). Just be sure to check the resultant sample size (the “n”) reported for that subset of the data to confirm it is large enough to be meaningful.
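
As a sketch of what that drill-down looks like in practice, here is a hypothetical example that filters to a matching subset and checks the “n” before trusting the statistic. The column names and rows are my inventions, not the survey’s actual schema or data:

```python
# Filter to a subset matching your group, then check "n" before
# trusting the statistic. Schema and rows are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "employer": ["hospital", "hospital", "private group", "hospital"],
    "teaching": ["non-teaching", "teaching", "non-teaching", "non-teaching"],
    "region": ["South", "South", "South", "West"],
    "total_comp": [185_000, 170_000, 195_000, 188_000],
})

subset = df[
    (df["employer"] == "hospital")
    & (df["teaching"] == "non-teaching")
    & (df["region"] == "South")
]

n = len(subset)
print(f"n = {n}")  # here n = 1: far too small to be meaningful
if n >= 30:  # an arbitrary rule of thumb, not an SHM threshold
    print("Median compensation:", subset["total_comp"].median())
else:
    print("Sample too small; widen the filter or use a broader cut.")
```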

Remember that the survey is not telling you what is right for your group. The survey simply describes a number of metrics relevant to hospitalist practice. It is not SHM’s position on the right compensation or productivity for a particular practice, but it is the best source of national data regarding hospitalists. (See my column “Comp Close-Up” for a comparison between the SHM and Medical Group Management Association surveys [July 2007, p. 73].) Local factors, such as the two other hospitalist practices in your town, will probably influence your group’s productivity and compensation metrics far more than any national data set.

While it’s tempting to reduce things to a single number (e.g., how much is the average hospitalist paid?), doing so falls prey to salience bias. Try to grasp the stories behind the numbers by understanding the survey methods and looking at responses for different subsets of the survey population. And realize that the right or optimal compensation and productivity for a group might be quite different from the survey means and medians. TH

Dr. Nelson has been a practicing hospitalist since 1988 and is co-founder and past president of SHM. He is a principal in Nelson/Flores Associates, a national hospitalist practice management consulting firm. He is also part of the faculty for SHM’s “Best Practices in Managing a Hospital Medicine Program” course. This column represents his views and is not intended to reflect an official position of SHM.

Reference

1. Surowiecki J. Running numbers. The New Yorker. January 21, 2008. Available at: www.newyorker.com/talk/financial/2008/01/21/080121ta_talk_surowiecki. Accessed February 7, 2008.