Physicists have strict rules about significant figures. Medical journals lack this professional discipline, and the lapse produces distortions that mislead readers.

Whenever you measure and report something in physics, the precision of the measurement is reflected in how the value is written. Writing a result with more digits implies that higher precision was achieved. If that isn’t the case, you are falsely claiming skill and accomplishment. You’ve entered the zone of post-truth.

This point was taught by my high school physics teacher, Mr. Gunnar Overgaard, may he rest in peace. Suppose we measured the length of the lab table with a meter stick, repeated the measurement three times, and computed an average. Our table was 243.7 cm long. If we wrote 243.73 or 243.73333, we got a lower grade. A meter stick has markings only every 0.1 cm, so the precision of the reported measurement should reflect that limitation.
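The arithmetic is simple enough to script. Here is a minimal sketch in Python, with three invented readings standing in for our lab measurements:

# Three hypothetical meter-stick readings, each good to 0.1 cm.
readings = [243.7, 243.8, 243.7]

average = sum(readings) / len(readings)  # 243.7333... cm

# The meter stick resolves only 0.1 cm, so report one decimal place, no more.
print(f"{round(average, 1):.1f} cm")     # prints "243.7 cm"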

Researchers in medicine seem to have skipped that lesson in physics lab. In medical journals, the default seems to be to report measurements to two decimal places, such as 16.67%, which is a gross distortion of the precision when that figure really means 2 of 12 patients had the finding.
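A quick calculation shows how little those extra digits mean. This sketch assumes only the 2-of-12 figure above and compares the four-digit percentage with its own sampling noise:

# 2 of 12 patients: naive percentage versus its sampling uncertainty.
k, n = 2, 12
p = k / n                      # 0.1666...
print(f"{100 * p:.2f}%")       # "16.67%" -- four digits of implied precision

# Rough standard error of a proportion: sqrt(p * (1 - p) / n).
se = (p * (1 - p) / n) ** 0.5  # about 0.11, i.e., 11 percentage points
print(f"+/- {100 * se:.0f} percentage points")

Every digit after the first is drowned out by roughly 11 points of uncertainty.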

This issue of precision came up recently in two papers published about the number of deaths caused by Hurricane Maria in Puerto Rico. The official death toll was 64. This number became a political hot potato when President Trump cited it as if it were evidence that he and the current local government had managed the emergency response better than George W. Bush did for Katrina.

On May 29, 2018, some researchers at the Harvard School of Public Health, a prestigious institution, published an article in The New England Journal of Medicine, a prestigious journal. You would presume that pair could report properly. The abstract said “This rate yielded a total of 4,645 excess deaths during this period (95% CI, 793 to 8,498).”1 Many newspapers published the number 4,645 in a headline. Most newspapers didn’t include all of the scientific mumbo jumbo about bias and confidence intervals.

However, the number 4,645 did not pass the sniff test at many newspapers, including The Washington Post. The Post’s headline began “Harvard study estimates thousands died,”2 and the story went on to clarify that “The Harvard study’s statistical analysis found that deaths related to the hurricane fell within a range of about 800 to more than 8,000.” That range carries one significant digit. Then the fact checkers went to work. They didn’t issue a Pinocchio score, but under the headline “Did exactly 4,645 people die in Hurricane Maria? Nope”3 they concluded that “it’s an egregious example of false precision to cite the ‘4,645’ number without explaining how fuzzy the number really is.”

The situation was compounded 3 days later when another news report had the Puerto Rico Department of Public Health putting the death toll at 1,397. Many assumptions go into determining what counts as an excess death. If false precision makes it appear the scientists have a political agenda, it casts doubt on whether the assumptions they made are objective and unbiased.

The result on social media was predictable. Outrage was expressed, as always. Lawsuits have been filed. The reputations of all scientists have been impugned. The implication is that, depending on your political leanings, you can choose 64, 1,000, 1,400, or 4,645 and call any number just as true as another. Worse, instead of focusing on the severity of the catastrophe and how we might have responded better then, better now, and with better planning for the future, the debate has focused on alternative facts and fake scientific news. Thanks, Harvard.

So in the spirit of thinking globally but acting locally, what can I do? I love my editor. I have hinted before that rounding the numbers we report makes them easier to read as well as more scientifically accurate. We’ve done it a few times recently, but now that The Washington Post has done it on a major news story, should this practice become the norm for journalism? If medical journal editors won’t handle precision honestly, other journalists must step up. I’m distressed when I review an article that reports 14.6% agreed and 79.2% strongly agreed, when I know those 3-digit percentages really mean 7/48 and 38/48; they should be rounded to two significant figures. And isn’t it easier to read and comprehend a report that three treatment groups had positive findings of 4%, 12%, and 10%, rather than 4.25%, 12.08%, and 9.84%?
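Rounding to significant figures, rather than to decimal places, is a one-liner in most languages. A minimal Python sketch; the helper round_sig is my own illustration, not any journal’s tooling:

from math import floor, log10

def round_sig(x: float, sig: int = 2) -> float:
    """Round nonzero x to `sig` significant figures."""
    return round(x, sig - 1 - floor(log10(abs(x))))

# The survey percentages above, at two significant figures:
print(round_sig(14.6), round_sig(79.2))         # 15.0 79.0

# The treatment-group percentages, rounded to whole percents:
print([round(p) for p in (4.25, 12.08, 9.84)])  # [4, 12, 10]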

Scientists using this false precision (and peer reviewers who allow it) need to be corrected. They are trying to sell their research as a Louis Vuitton handbag when we all know it is only a cheap knockoff.

Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at pdnews@mdedge.com.

References

1. N Engl J Med. 2018 May 29. doi: 10.1056/NEJMsa1803972.

2. “Harvard study estimates thousands died in Puerto Rico because of Hurricane Maria,” by Arelis R. Hernández and Laurie McGinley, The Washington Post, May 29, 2018.

3. “Did exactly 4,645 people die in Hurricane Maria? Nope,” by Glenn Kessler, The Washington Post, June 1, 2018.
