Why Cardiac Biomarkers Don’t Help Predict Heart Disease


This transcript has been edited for clarity. 

It’s the counterintuitive stuff in epidemiology that always really interests me. One intuition many of us have is that if a risk factor is significantly associated with an outcome, knowledge of that risk factor would help to predict that outcome. Makes sense. Feels right.

But it’s not right. Not always.

Here’s a fake example to illustrate my point. Let’s say we have 10,000 individuals whom we follow for 10 years, and 2,000 of them die. (It’s been a rough decade.) At baseline, I measured a novel biomarker, the Perry Factor, in everyone. To keep it simple, the Perry Factor has only two values: 0 or 1.

I then do a standard associational analysis and find that individuals who are positive for the Perry Factor have a 40-fold higher odds of death than those who are negative for it. I am beginning to reconsider ascribing my good name to this biomarker. This is a highly statistically significant result — a P value <.001. 

Clearly, knowledge of the Perry Factor should help me predict who will die in the cohort. I evaluate predictive power using a metric called the area under the receiver operating characteristic curve (AUC, referred to as the C-statistic in time-to-event studies). It tells you, given two people — one who dies and one who doesn’t — how frequently you “pick” the right person based on knowledge of their Perry Factor.

A C-statistic of 0.5, or 50%, would mean the Perry Factor gives you no better results than a coin flip; it’s chance. A C-statistic of 1 is perfect prediction. So, what will the C-statistic be, given the incredibly strong association of the Perry Factor with outcomes? 0.9? 0.95?

0.5024. Almost useless.


[Figure: Perry Factor]


Let’s figure out why strength of association and usefulness for prediction are not always the same thing.

I constructed my fake Perry Factor dataset quite carefully to illustrate this point. Let me show you what happened. What you see here is a breakdown of the patients in my fake study. You can see that just 11 of them were Perry Factor positive, but 10 of those 11 ended up dying.

[Figure: Behind the Fake Data]
 

That’s quite unlikely by chance alone. It really does appear that if you have the Perry Factor, your risk for death is much higher. But the reason the Perry Factor is a bad predictor is that it is so rare in the population. Sure, you can use it to correctly predict the outcome of 10 of the 11 people who have it, but the vast majority of people don’t have the Perry Factor. It’s useless for distinguishing who will die vs who will live in that population.
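
To make the arithmetic concrete, here is a minimal sketch in Python using only the 2-by-2 counts from the fake dataset above. It reproduces both the roughly 40-fold odds ratio and the 0.5024 C-statistic; with a single binary marker, the ROC curve has just one operating point, so the C-statistic is simply the average of sensitivity and specificity.

# Fake Perry Factor data: 10,000 people, 2,000 deaths, 11 marker-positive (10 of whom die)
pos_dead, pos_alive = 10, 1         # Perry Factor positive
neg_dead, neg_alive = 1990, 7999    # Perry Factor negative
# Odds ratio: odds of death if positive divided by odds of death if negative
odds_ratio = (pos_dead / pos_alive) / (neg_dead / neg_alive)
# With one binary predictor, the C-statistic equals the mean of sensitivity and specificity
sensitivity = pos_dead / (pos_dead + neg_dead)     # 10 / 2000 = 0.005
specificity = neg_alive / (neg_alive + pos_alive)  # 7999 / 8000 = 0.999875
c_statistic = (sensitivity + specificity) / 2
print(round(odds_ratio, 1), round(c_statistic, 4))  # 40.2 0.5024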

Why have I spent so much time trying to reverse our intuition that strength of association and strength of predictive power must be related? Because it helps to explain this paper, “Prognostic Value of Cardiovascular Biomarkers in the Population,” appearing in JAMA, which is a very nice piece of work trying to help us better predict cardiovascular disease.

I don’t need to tell you that cardiovascular disease is the number-one killer in this country and most of the world. I don’t need to tell you that we have really good preventive therapies and lifestyle interventions that can reduce the risk. But it would be nice to know in whom, specifically, we should use those interventions.

Cardiovascular risk scores, to date, are pretty simple. The most common one in use in the United States, the pooled cohort risk equation, has nine variables, two of which require a cholesterol panel and one a blood pressure test. It’s easy and it’s pretty accurate.

[Figure: ASCVD Risk Calculator]


Using the score from the pooled cohort risk calculator, you get a C-statistic as high as 0.82 when applied to Black women and as low as 0.71 when applied to Black men. Non-Black individuals are in the middle. Not bad. But, clearly, not perfect.

And aren’t we in the era of big data, the era of personalized medicine? We have dozens, maybe hundreds, of quantifiable biomarkers that are associated with subsequent heart disease. Surely, by adding these biomarkers into the risk equation, we can improve prediction. Right?

The JAMA study includes 164,054 patients pooled from 28 cohort studies from 12 countries. All the studies measured various key biomarkers at baseline and followed their participants for cardiovascular events like heart attack, stroke, coronary revascularization, and so on.

The biomarkers in question are really the big guns in this space: troponin, a marker of stress on the heart muscle; NT-proBNP, a marker of stretch on the heart muscle; and C-reactive protein, a marker of inflammation. In every case, higher levels of these markers at baseline were associated with a higher risk for cardiovascular disease in the future.

Troponin T, shown here, has a basically linear relationship with the risk for subsequent cardiovascular disease.

[Figure: Troponin T]


BNP seems to demonstrate more of a threshold effect, where levels above 60 start to be associated with problems.

[Figure: BNP]


And CRP does a similar thing, with levels above 1.

[Figure: CRP]


All of these findings were statistically significant. If you have higher levels of one or more of these biomarkers, you are more likely to have cardiovascular disease in the future.

Of course, our old friend the pooled cohort risk equation is still here — in the background — requiring just that one blood test and measurement of blood pressure. Let’s talk about predictive power.

The pooled cohort risk equation score, in this study, had a C-statistic of 0.812.

By adding troponin, BNP, and CRP to the equation, the new C-statistic is 0.819. Barely any change.

[Figure: Effect of Adding More Data]


Now, the authors looked at different types of prediction here. The greatest improvement in the AUC was seen when they tried to predict heart failure within 1 year of measurement; there the AUC improved by 0.04. But the presence of BNP as a biomarker and the short time window of 1 year make me wonder whether this is really prediction at all or whether they were essentially just diagnosing people with existing heart failure.


Why does this happen? Why do these promising biomarkers, clearly associated with bad outcomes, fail to improve our ability to predict the future? I already gave one example, which has to do with how the markers are distributed in the population. But even more relevant here is that the new markers will only improve prediction insofar as they are not already represented in the old predictive model. 

Of course, BNP, for example, wasn’t in the old model. But smoking was. Diabetes was. Blood pressure was. All of that data might actually tell you something about the patient’s BNP through their mutual correlation. And improvement in prediction requires new information. 
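
To illustrate that point with a toy simulation (made-up numbers, not the study’s analysis, and assuming numpy and scikit-learn are available), here is a short Python sketch. A hypothetical “new” biomarker that is mostly a noisy reflection of risk factors already in the model is associated with the outcome on its own, yet adding it barely moves the C-statistic; a marker that carries genuinely independent information does move it.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 50_000
# "Old" risk factors already in the model (stand-ins for blood pressure, smoking, diabetes)
old = rng.normal(size=(n, 3))
# A hypothetical new marker that mostly reflects the old factors (mutual correlation) plus noise
redundant = old @ np.array([0.6, 0.5, 0.4]) + rng.normal(scale=0.5, size=n)
# A hypothetical new marker carrying genuinely independent information
independent = rng.normal(size=n)
# The outcome depends on the old factors and the independent signal, not on the redundant marker
logit = -2.5 + old @ np.array([0.7, 0.6, 0.5]) + 0.6 * independent
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def auc_with(X):
    return roc_auc_score(y, LogisticRegression().fit(X, y).predict_proba(X)[:, 1])

print(round(auc_with(old), 3))                                  # baseline model
print(round(auc_with(np.column_stack([old, redundant])), 3))    # essentially unchanged
print(round(auc_with(np.column_stack([old, independent])), 3))  # meaningfully higher

The redundant marker would look impressive in a univariable analysis, which is exactly the trap the article describes.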

This is actually why I consider this a really successful study. We need to do studies like this to help us find what those new sources of information might be. It doesn’t seem like these biomarkers will help us in our effort to risk-stratify people. So, we move on to other domains. Perhaps social determinants of health would improve risk prediction. Perhaps insurance status. Perhaps environmental exposures. Perhaps markers of stress.

We will never get to a C-statistic of 1. Perfect prediction is the domain of palm readers and astrophysicists. But better prediction is always possible through data. The big question, of course, is which data?
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

It Would Be Nice if Olive Oil Really Did Prevent Dementia


This transcript has been edited for clarity.

As you all know by now, I’m always looking out for lifestyle changes that are both pleasurable and healthy. They are hard to find, especially when it comes to diet. My kids complain about this all the time: “When you say ‘healthy food,’ you just mean yucky food.” And yes, French fries are amazing, and no, we can’t have them three times a day.

So, when I saw an article claiming that olive oil reduces the risk for dementia, I was interested. I love olive oil; I cook with it all the time. But as is always the case in the world of nutritional epidemiology, we need to be careful. There are a lot of reasons to doubt the results of this study — and one reason to believe it’s true.

The study I’m talking about is “Consumption of Olive Oil and Diet Quality and Risk of Dementia-Related Death,” appearing in JAMA Network Open and following a well-trod formula in the nutritional epidemiology space.

Nearly 100,000 participants, all healthcare workers, filled out a food frequency questionnaire every 4 years with 130 questions touching on all aspects of diet: How often do you eat bananas, bacon, olive oil? Participants were followed for more than 20 years, and if they died, the cause of death was flagged as being dementia-related or not. Over that time frame there were around 38,000 deaths, of which 4751 were due to dementia.

The rest is just statistics. The authors show that those who reported consuming more olive oil were less likely to die from dementia — about 50% less likely, if you compare those who reported eating more than 7 grams of olive oil a day with those who reported eating none.
 

Is It What You Eat, or What You Don’t Eat?

And we could stop there if we wanted to; I’m sure big olive oil would be happy with that. Is there such a thing as “big olive oil”? But no, we need to dig deeper here because this study has the same problems as all nutritional epidemiology studies. Number one, no one is sitting around drinking small cups of olive oil. They consume it with other foods. And it was clear from the food frequency questionnaire that people who consumed more olive oil also consumed less red meat, more fruits and vegetables, more whole grains, more butter, and less margarine. And those are just the findings reported in the paper. I suspect that people who eat more olive oil also eat more tomatoes, for example, though data this granular aren’t shown. So, it can be really hard, in studies like this, to know for sure that it’s actually the olive oil that is helpful rather than some other constituent in the diet.

The flip side of that coin presents another issue. The food you eat is also a marker of the food you don’t eat. People who ate olive oil consumed less margarine, for example. At the time of this study, margarine was still adulterated with trans-fats, which a pretty solid evidence base suggests are really bad for your vascular system. So perhaps it’s not that olive oil is particularly good for you but that something else is bad for you. In other words, simply adding olive oil to your diet without changing anything else may not do anything.

The other major problem with studies of this sort is that people don’t consume food at random. The type of person who eats a lot of olive oil is simply different from the type of person who doesn’t. For one thing, olive oil is expensive. A 25-ounce bottle of olive oil is on sale at my local supermarket right now for $11.00. A similar-sized bottle of vegetable oil goes for $4.00.

Isn’t it interesting that food that costs more money tends to be associated with better health outcomes? (I’m looking at you, red wine.) Perhaps it’s not the food; perhaps it’s the money. We aren’t provided data on household income in this study, but we can see that the heavy olive oil users were less likely to be current smokers and they got more physical activity.

Now, the authors are aware of these limitations and do their best to account for them. In multivariable models, they adjust for other stuff in the diet, and even for income (sort of; they use census tract as a proxy for income, which is really a broad brush), and still find a significant though weakened association showing a protective effect of olive oil on dementia-related death. But still — adjustment is never perfect, and the small effect size here could definitely be due to residual confounding.
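
To see how a coarse proxy can leave residual confounding behind, here is a toy Python simulation (all numbers invented, assuming numpy and statsmodels are available). Income drives both olive oil intake and the risk for dementia-related death, olive oil itself does nothing, and adjusting only for a coarse, tract-like grouping of income still leaves an apparently “protective” olive oil coefficient.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100_000
# Income (standardized) drives both olive oil intake and mortality; olive oil has no true effect here
income = rng.normal(size=n)
olive_oil = 0.8 * income + rng.normal(size=n)                  # higher income, more olive oil
death = rng.binomial(1, 1 / (1 + np.exp(3.0 + 0.5 * income)))  # higher income, lower risk
# A coarse proxy for income, akin to adjusting for census tract rather than household income
income_quartile = np.digitize(income, np.quantile(income, [0.25, 0.5, 0.75]))
# Logistic model of death on olive oil, adjusted only for the coarse proxy
X = sm.add_constant(np.column_stack([olive_oil, income_quartile]))
coefs = sm.Logit(death, X).fit(disp=0).params
print(coefs[1])  # negative ("protective") coefficient for olive oil despite no true effect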

Evidence More Convincing

Now, I did tell you that there is one reason to believe that this study is true, but it’s not really from this study.

It’s from the PREDIMED randomized trial.

This is nutritional epidemiology I can get behind. Published in 2018, investigators in Spain randomized around 7500 participants to receive a liter of olive oil once a week vs mixed nuts, vs small nonfood gifts, the idea here being that if you have olive oil around, you’ll use it more. And people who were randomly assigned to get the olive oil had a 30% lower rate of cardiovascular events. A secondary analysis of that study found that the rate of development of mild cognitive impairment was 65% lower in those who were randomly assigned to olive oil. That’s an impressive result.

So, there might be something to this olive oil thing, but I’m not quite ready to add it to my “pleasurable things that are still good for you” list just yet. Though it does make me wonder: Can we make French fries in the stuff?
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

This transcript has been edited for clarity.

As you all know by now, I’m always looking out for lifestyle changes that are both pleasurable and healthy. They are hard to find, especially when it comes to diet. My kids complain about this all the time: “When you say ‘healthy food,’ you just mean yucky food.” And yes, French fries are amazing, and no, we can’t have them three times a day.

So, when I saw an article claiming that olive oil reduces the risk for dementia, I was interested. I love olive oil; I cook with it all the time. But as is always the case in the world of nutritional epidemiology, we need to be careful. There are a lot of reasons to doubt the results of this study — and one reason to believe it’s true.

The study I’m talking about is “Consumption of Olive Oil and Diet Quality and Risk of Dementia-Related Death,” appearing in JAMA Network Open and following a well-trod formula in the nutritional epidemiology space.

Nearly 100,000 participants, all healthcare workers, filled out a food frequency questionnaire every 4 years with 130 questions touching on all aspects of diet: How often do you eat bananas, bacon, olive oil? Participants were followed for more than 20 years, and if they died, the cause of death was flagged as being dementia-related or not. Over that time frame there were around 38,000 deaths, of which 4751 were due to dementia.

The rest is just statistics. The authors show that those who reported consuming more olive oil were less likely to die from dementia — about 50% less likely, if you compare those who reported eating more than 7 grams of olive oil a day with those who reported eating none.
 

Is It What You Eat, or What You Don’t Eat?

And we could stop there if we wanted to; I’m sure big olive oil would be happy with that. Is there such a thing as “big olive oil”? But no, we need to dig deeper here because this study has the same problems as all nutritional epidemiology studies. Number one, no one is sitting around drinking small cups of olive oil. They consume it with other foods. And it was clear from the food frequency questionnaire that people who consumed more olive oil also consumed less red meat, more fruits and vegetables, more whole grains, more butter, and less margarine. And those are just the findings reported in the paper. I suspect that people who eat more olive oil also eat more tomatoes, for example, though data this granular aren’t shown. So, it can be really hard, in studies like this, to know for sure that it’s actually the olive oil that is helpful rather than some other constituent in the diet.

The flip side of that coin presents another issue. The food you eat is also a marker of the food you don’t eat. People who ate olive oil consumed less margarine, for example. At the time of this study, margarine was still adulterated with trans-fats, which a pretty solid evidence base suggests are really bad for your vascular system. So perhaps it’s not that olive oil is particularly good for you but that something else is bad for you. In other words, simply adding olive oil to your diet without changing anything else may not do anything.

The other major problem with studies of this sort is that people don’t consume food at random. The type of person who eats a lot of olive oil is simply different from the type of person who doesn›t. For one thing, olive oil is expensive. A 25-ounce bottle of olive oil is on sale at my local supermarket right now for $11.00. A similar-sized bottle of vegetable oil goes for $4.00.

Isn’t it interesting that food that costs more money tends to be associated with better health outcomes? (I’m looking at you, red wine.) Perhaps it’s not the food; perhaps it’s the money. We aren’t provided data on household income in this study, but we can see that the heavy olive oil users were less likely to be current smokers and they got more physical activity.

Now, the authors are aware of these limitations and do their best to account for them. In multivariable models, they adjust for other stuff in the diet, and even for income (sort of; they use census tract as a proxy for income, which is really a broad brush), and still find a significant though weakened association showing a protective effect of olive oil on dementia-related death. But still — adjustment is never perfect, and the small effect size here could definitely be due to residual confounding.
 

 

 

Evidence More Convincing

Now, I did tell you that there is one reason to believe that this study is true, but it’s not really from this study.

It’s from the PREDIMED randomized trial.

This is nutritional epidemiology I can get behind. Published in 2018, investigators in Spain randomized around 7500 participants to receive a liter of olive oil once a week vs mixed nuts, vs small nonfood gifts, the idea here being that if you have olive oil around, you’ll use it more. And people who were randomly assigned to get the olive oil had a 30% lower rate of cardiovascular events. A secondary analysis of that study found that the rate of development of mild cognitive impairment was 65% lower in those who were randomly assigned to olive oil. That’s an impressive result.

So, there might be something to this olive oil thing, but I’m not quite ready to add it to my “pleasurable things that are still good for you” list just yet. Though it does make me wonder: Can we make French fries in the stuff?
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

This transcript has been edited for clarity.

As you all know by now, I’m always looking out for lifestyle changes that are both pleasurable and healthy. They are hard to find, especially when it comes to diet. My kids complain about this all the time: “When you say ‘healthy food,’ you just mean yucky food.” And yes, French fries are amazing, and no, we can’t have them three times a day.

So, when I saw an article claiming that olive oil reduces the risk for dementia, I was interested. I love olive oil; I cook with it all the time. But as is always the case in the world of nutritional epidemiology, we need to be careful. There are a lot of reasons to doubt the results of this study — and one reason to believe it’s true.

The study I’m talking about is “Consumption of Olive Oil and Diet Quality and Risk of Dementia-Related Death,” appearing in JAMA Network Open and following a well-trod formula in the nutritional epidemiology space.

Nearly 100,000 participants, all healthcare workers, filled out a food frequency questionnaire every 4 years with 130 questions touching on all aspects of diet: How often do you eat bananas, bacon, olive oil? Participants were followed for more than 20 years, and if they died, the cause of death was flagged as being dementia-related or not. Over that time frame there were around 38,000 deaths, of which 4751 were due to dementia.

The rest is just statistics. The authors show that those who reported consuming more olive oil were less likely to die from dementia — about 50% less likely, if you compare those who reported eating more than 7 grams of olive oil a day with those who reported eating none.
 

Is It What You Eat, or What You Don’t Eat?

And we could stop there if we wanted to; I’m sure big olive oil would be happy with that. Is there such a thing as “big olive oil”? But no, we need to dig deeper here because this study has the same problems as all nutritional epidemiology studies. Number one, no one is sitting around drinking small cups of olive oil. They consume it with other foods. And it was clear from the food frequency questionnaire that people who consumed more olive oil also consumed less red meat, more fruits and vegetables, more whole grains, more butter, and less margarine. And those are just the findings reported in the paper. I suspect that people who eat more olive oil also eat more tomatoes, for example, though data this granular aren’t shown. So, it can be really hard, in studies like this, to know for sure that it’s actually the olive oil that is helpful rather than some other constituent in the diet.

The flip side of that coin presents another issue. The food you eat is also a marker of the food you don’t eat. People who ate olive oil consumed less margarine, for example. At the time of this study, margarine was still adulterated with trans-fats, which a pretty solid evidence base suggests are really bad for your vascular system. So perhaps it’s not that olive oil is particularly good for you but that something else is bad for you. In other words, simply adding olive oil to your diet without changing anything else may not do anything.

The other major problem with studies of this sort is that people don’t consume food at random. The type of person who eats a lot of olive oil is simply different from the type of person who doesn›t. For one thing, olive oil is expensive. A 25-ounce bottle of olive oil is on sale at my local supermarket right now for $11.00. A similar-sized bottle of vegetable oil goes for $4.00.

Isn’t it interesting that food that costs more money tends to be associated with better health outcomes? (I’m looking at you, red wine.) Perhaps it’s not the food; perhaps it’s the money. We aren’t provided data on household income in this study, but we can see that the heavy olive oil users were less likely to be current smokers and they got more physical activity.

Now, the authors are aware of these limitations and do their best to account for them. In multivariable models, they adjust for other stuff in the diet, and even for income (sort of; they use census tract as a proxy for income, which is really a broad brush), and still find a significant though weakened association showing a protective effect of olive oil on dementia-related death. But still — adjustment is never perfect, and the small effect size here could definitely be due to residual confounding.
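
To see why that matters, here is a minimal Python sketch with purely made-up numbers (nothing from the actual paper) in which olive oil has no true effect at all: income drives both olive oil consumption and dementia-related death, and adjusting for a coarse census-tract-style proxy of income still leaves a spurious “protective” association.

```python
# Hypothetical simulation of residual confounding -- not the study's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50_000

income = rng.normal(size=n)                                  # true confounder
census_proxy = np.round(income + rng.normal(size=n))         # coarse, noisy proxy of income
olive_oil = (income + rng.normal(size=n) > 0).astype(float)  # wealthier people eat more olive oil
# Outcome depends on income only; olive oil has NO causal effect in this toy world.
dementia_death = ((-0.5 * income + rng.normal(size=n)) > 1.5).astype(float)

# "Adjusting" for the proxy instead of true income still leaves an association.
X = sm.add_constant(np.column_stack([olive_oil, census_proxy]))
fit = sm.Logit(dementia_death, X).fit(disp=0)
print(fit.params[1])  # negative ("protective") coefficient on olive oil = residual confounding
```

The particular numbers are irrelevant; the point is that a proxy only soaks up part of the confounding, so a weakened-but-significant adjusted association is exactly what residual confounding can look like.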

Evidence More Convincing

Now, I did tell you that there is one reason to believe that this study is true, but it’s not really from this study.

It’s from the PREDIMED randomized trial.

This is nutritional epidemiology I can get behind. In that trial, published in 2018, investigators in Spain randomized around 7500 participants to receive a liter of olive oil per week, mixed nuts, or small nonfood gifts, the idea being that if you have olive oil around, you’ll use it more. People who were randomly assigned to the olive oil had a 30% lower rate of cardiovascular events. A secondary analysis of that study found that the rate of development of mild cognitive impairment was 65% lower in those randomly assigned to olive oil. That’s an impressive result.

So, there might be something to this olive oil thing, but I’m not quite ready to add it to my “pleasurable things that are still good for you” list just yet. Though it does make me wonder: Can we make French fries in the stuff?
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.


Intermittent Fasting + HIIT: Fitness Fad or Fix?

Article Type
Changed
Thu, 05/09/2024 - 13:35

Let’s be honest: Although as physicians we have the power of the prescription pad, so much of health, in the end, comes down to lifestyle. Of course, taking a pill is often way easier than changing your longstanding habits. And what’s worse, doesn’t it always seem like the lifestyle stuff that is good for your health is unpleasant?

Two recent lifestyle interventions that I have tried and find really not enjoyable are time-restricted eating (also known as intermittent fasting) and high-intensity interval training, or HIIT. The former leaves me hangry for half the day; the latter is, well, it’s just really hard compared with my usual jog.

[Photo: Dr. F. Perry Wilson]

But given the rule of unpleasant lifestyle changes, I knew as soon as I saw this recent study what the result would be. What if we combined time-restricted eating with high-intensity interval training?

I’m referring to this study, appearing in PLOS ONE from Ranya Ameur and colleagues, which is a small trial that enrolled otherwise healthy women with a BMI > 30 and randomized them to one of three conditions.

First was time-restricted eating. Women in this group could eat whatever they wanted, but only from 8 a.m. to 4 p.m. daily.

Second: high-intensity functional training. This is a variant of high-intensity interval training which focuses a bit more on resistance exercise than on pure cardiovascular stuff but has the same vibe of doing brief bursts of intensive activity followed by a cool-down period.

Third: a combination of the two. Sounds rough to me.

The study was otherwise straightforward. At baseline, researchers collected data on body composition and dietary intake, and measured blood pressure, glucose, insulin, and lipid biomarkers. A 12-week intervention period followed, after which all of this stuff was measured again.

Now, you may have noticed that there is no control group in this study. We’ll come back to that — a few times.

Let me walk you through some of the outcomes here.

First off, body composition metrics. All three groups lost weight — on average, around 10% of body weight which, for a 12-week intervention, is fairly impressive. BMI and waist circumference went down as well, and, interestingly, much of the weight loss here was in fat mass, not fat-free mass.

Most interventions that lead to weight loss — and I’m including some of the newer drugs here — lead to both fat and muscle loss. That might not be as bad as it sounds; the truth is that muscle mass increases as fat increases because of the simple fact that if you’re carrying more weight when you walk around, your leg muscles get bigger. But to preserve muscle mass in the face of fat loss is sort of a Goldilocks finding, and, based on these results, there’s a suggestion that the high-intensity functional training helps to do just that.

The dietary intake findings were really surprising to me. Across the board, caloric intake decreased. It’s no surprise that time-restricted eating reduces calorie intake. That has been shown many times before and is probably the main reason it induces weight loss — less time to eat means you eat less.

But why would high-intensity functional training lead to lower caloric intake? Most people, myself included, get hungry after they exercise. In fact, one of the reasons it’s hard to lose weight with exercise alone is that we end up eating more calories to make up for what we lost during the exercise. This calorie reduction could be a unique effect of this type of exercise, but honestly this could also be something called the Hawthorne effect. Women in the study kept a food diary to track their intake, and the act of doing that might actually make you eat less. It makes it a little more annoying to snack a bit if you know you have to write it down. This is a situation where I would kill for a control group.

The lipid findings are also pretty striking, with around a 40% reduction in LDL across the board, and evidence of synergistic effects of combined TRE and high-intensity training on total cholesterol and triglycerides. This is like a statin level of effect — pretty impressive. Again, my heart pines for a control group, though.

Same story with glucose and insulin measures: an impressive reduction in fasting glucose and good evidence that the combination of time-restricted eating and high-intensity functional training reduces insulin levels and HOMA-IR as well.

Really the only thing that wasn’t very impressive was the change in blood pressure, with only modest decreases across the board.

Okay, so let’s take a breath after this high-intensity cerebral workout and put this all together. This was a small study, lacking a control group, but with large effect sizes in very relevant clinical areas. It confirms what we know about time-restricted eating — that it makes you eat fewer calories — and introduces the potential that vigorous exercise can not only magnify the benefits of time-restricted eating but maybe even mitigate some of the risks, like the risk for muscle loss. And of course, it comports with my central hypothesis, which is that the more unpleasant a lifestyle intervention is, the better it is for you. No pain, no gain, right?

Of course, I am being overly dogmatic. There are plenty of caveats. Wrestling bears is quite unpleasant and almost certainly bad for you. And there are even some pleasant things that are pretty good for you — like coffee and sex. And there are even people who find time-restricted eating and high-intensity training pleasurable. They are called masochists.

I’m joking. The truth is that any lifestyle change is hard, but with persistence the changes become habits and, eventually, those habits do become pleasurable. Or, at least, much less painful. The trick is getting over the hump of change. If only there were a pill for that.
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Connecticut. He has disclosed no relevant financial relationships. This transcript has been edited for clarity.

A version of this article appeared on Medscape.com.


Are Women Better Doctors Than Men?

Article Type
Changed
Wed, 04/24/2024 - 11:41

This transcript has been edited for clarity.

It’s a battle of the sexes today as we dive into a paper that makes you say, “Wow, what an interesting study” and also “Boy, am I glad I didn’t do that study.” That’s because studies like this are always somewhat fraught; they say something about medicine but also something about society — and that makes this a bit precarious. But that’s never stopped us before. So, let’s go ahead and try to answer the question: Do women make better doctors than men?

On the surface, this question seems nearly impossible to answer. It’s too broad; what does it mean to be a “better” doctor? At first blush it seems that there are just too many variables to control for here: the type of doctor, the type of patient, the clinical scenario, and so on.

But this study, “Comparison of hospital mortality and readmission rates by physician and patient sex,” which appears in Annals of Internal Medicine, uses a fairly ingenious method to cut through all the bias by leveraging two simple facts: First, hospital medicine is largely conducted by hospitalists these days; second, due to the shift-based nature of hospitalist work, the hospitalist you get when you are admitted to the hospital is pretty much random.

In other words, if you are admitted to the hospital for an acute illness and get a hospitalist as your attending, you have no control over whether it is a man or a woman. Is this a randomized trial? No, but it’s not bad.

Researchers used Medicare claims data to identify adults over age 65 who had nonelective hospital admissions throughout the United States. The claims revealed the sex of the patient and the name of the attending physician. By linking to a medical provider database, they could determine the sex of the provider.

The goal was to look at outcomes across four dyads:

  • Male patient – male doctor
  • Male patient – female doctor
  • Female patient – male doctor
  • Female patient – female doctor

The primary outcome was 30-day mortality.

I told you that focusing on hospitalists produces some pseudorandomization, but let’s look at the data to be sure. Just under a million patients were treated by approximately 50,000 physicians, 30% of whom were female. And, though female patients and male patients differed, they did not differ with respect to the sex of their hospitalist. So, by physician sex, patients were similar in mean age, race, ethnicity, household income, eligibility for Medicaid, and comorbid conditions. The authors even created a “predicted mortality” score which was similar across the groups as well.
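
If you want a feel for how that kind of check works, here is a rough sketch in Python with entirely made-up data and hypothetical column names: compute standardized mean differences of patient characteristics by physician sex, and call the groups balanced when the differences are all small.

```python
# Illustrative balance check -- hypothetical data and column names, not the Medicare claims.
import numpy as np
import pandas as pd

def smd(a, b):
    """Standardized mean difference: difference in means over the pooled SD."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "physician_female": rng.integers(0, 2, 10_000),
    "patient_age": rng.normal(78, 8, 10_000),
    "predicted_mortality": rng.uniform(0.02, 0.20, 10_000),
})

for covariate in ["patient_age", "predicted_mortality"]:
    female_md = df.loc[df.physician_female == 1, covariate]
    male_md = df.loc[df.physician_female == 0, covariate]
    print(covariate, round(smd(female_md, male_md), 3))
# A common rule of thumb: |SMD| < 0.1 suggests the groups are well balanced.
```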


Now, the female physicians were a bit different from the male physicians. The female hospitalists were slightly more likely to have an osteopathic degree, had slightly fewer admissions per year, and were a bit younger.

So, we have broadly similar patients regardless of who their hospitalist was, but hospitalists differ by factors other than their sex. Fine.

I’ve graphed the results here. Female patients had a significantly lower 30-day mortality rate than male patients, but they fared even better when cared for by female doctors compared with male doctors. There wasn’t a particularly strong influence of physician sex on outcomes for male patients. The secondary outcome, 30-day hospital readmission, showed a similar trend.


This is a relatively small effect, to be sure, but if you multiply it across the millions of hospitalist admissions per year, you can start to put up some real numbers.

So, what is going on here? I see four broad buckets of possibilities.

Let’s start with the obvious explanation: Women, on average, are better doctors than men. I am married to a woman doctor, and based on my personal experience, this explanation is undoubtedly true. But why would that be?

The authors cite data that suggest that female physicians are less likely than male physicians to dismiss patient concerns — and in particular, the concerns of female patients — perhaps leading to fewer missed diagnoses. But this is impossible to measure with administrative data, so this study can no more tell us whether these female hospitalists are more attentive than their male counterparts than it can suggest that the benefit is mediated by the shorter average height of female physicians. Perhaps the key is being closer to the patient?

The second possibility here is that this has nothing to do with the sex of the physician at all; it has to do with those other things that associate with the sex of the physician. We know, for example, that the female physicians saw fewer patients per year than the male physicians, but the study authors adjusted for this in the statistical models. Still, other unmeasured factors (confounders) could be present. By the way, confounders wouldn’t necessarily change the primary finding — you are better off being cared for by female physicians. It’s just not because they are female; it’s a convenient marker for some other quality, such as age.

The third possibility is that the study represents a phenomenon called collider bias. The idea here is that physicians only get into the study if they are hospitalists, and the quality of physicians who choose to become a hospitalist may differ by sex. When deciding on a specialty, a talented resident considering certain lifestyle issues may find hospital medicine particularly attractive — and that draw toward a more lifestyle-friendly specialty may differ by sex, as some prior studies have shown. If true, the pool of women hospitalists may be better than their male counterparts because male physicians of that caliber don’t become hospitalists.

Okay, don’t write in. I’m just trying to cite examples of how to think about collider bias. I can’t prove that this is the case, and in fact the authors do a sensitivity analysis of all physicians, not just hospitalists, and show the same thing. So this is probably not true, but epidemiology is fun, right?
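
For the epidemiology nerds, here is a toy simulation of collider bias in Python. It has nothing to do with the real hospitalist data; it just shows how two traits that are independent in the whole population can become correlated once you look only at people selected on both.

```python
# Toy collider-bias simulation -- illustrative only, not modeled on the actual study.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

skill = rng.normal(size=n)            # clinical skill
lifestyle_pref = rng.normal(size=n)   # preference for lifestyle-friendly work (independent of skill)

# Becoming a hospitalist (the collider) depends on both traits plus noise.
is_hospitalist = (skill + lifestyle_pref + rng.normal(size=n)) > 1.5

print(np.corrcoef(skill, lifestyle_pref)[0, 1])           # ~0 in the full population
print(np.corrcoef(skill[is_hospitalist],
                  lifestyle_pref[is_hospitalist])[0, 1])   # clearly negative among hospitalists
```

Conditioning on the collider, that is, looking only within hospitalists, manufactures an association that does not exist in the population, which is exactly the kind of artifact the all-physician sensitivity analysis helps rule out.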

And the fourth possibility: This is nothing but statistical noise. The effect size is incredibly small and just on the border of statistical significance. Especially when you’re working with very large datasets like this, you’ve got to be really careful about overinterpreting statistically significant findings that are nevertheless of small magnitude.
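
To put some rough numbers on that, here is a back-of-envelope two-proportion test with invented mortality rates and roughly a million admissions; the absolute difference is a fraction of a percentage point, yet the p value looks impressive.

```python
# Back-of-envelope illustration with made-up numbers (not the paper's estimates).
import numpy as np
from scipy import stats

n_per_group = 500_000      # ~1 million admissions split in two
p_female_md = 0.082        # hypothetical 30-day mortality with female physicians
p_male_md = 0.084          # hypothetical 30-day mortality with male physicians

deaths_female = int(n_per_group * p_female_md)
deaths_male = int(n_per_group * p_male_md)

# Two-proportion z-test by hand.
p1, p2 = deaths_female / n_per_group, deaths_male / n_per_group
p_pool = (deaths_female + deaths_male) / (2 * n_per_group)
se = np.sqrt(p_pool * (1 - p_pool) * (2 / n_per_group))
z = (p1 - p2) / se
p_value = 2 * stats.norm.sf(abs(z))
print(f"absolute difference = {p1 - p2:.4f}, z = {z:.2f}, p = {p_value:.4g}")
# A 0.2 percentage point difference is "highly significant" at this sample size.
```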

Regardless, it’s an interesting study, one that made me think and, of course, worry a bit about how I would present it. Forgive me if I’ve been indelicate in handling the complex issues of sex, gender, and society here. But I’m not sure what you expect; after all, I’m only a male doctor.

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

 



This transcript has been edited for clarity.

It’s a battle of the sexes today as we dive into a paper that makes you say, “Wow, what an interesting study” and also “Boy, am I glad I didn’t do that study.” That’s because studies like this are always somewhat fraught; they say something about medicine but also something about society — and that makes this a bit precarious. But that’s never stopped us before. So, let’s go ahead and try to answer the question: Do women make better doctors than men?

On the surface, this question seems nearly impossible to answer. It’s too broad; what does it mean to be a “better” doctor? At first blush it seems that there are just too many variables to control for here: the type of doctor, the type of patient, the clinical scenario, and so on.

But this study, “Comparison of hospital mortality and readmission rates by physician and patient sex,” which appears in Annals of Internal Medicine, uses a fairly ingenious method to cut through all the bias by leveraging two simple facts: First, hospital medicine is largely conducted by hospitalists these days; second, due to the shift-based nature of hospitalist work, the hospitalist you get when you are admitted to the hospital is pretty much random.

In other words, if you are admitted to the hospital for an acute illness and get a hospitalist as your attending, you have no control over whether it is a man or a woman. Is this a randomized trial? No, but it’s not bad.

Researchers used Medicare claims data to identify adults over age 65 who had nonelective hospital admissions throughout the United States. The claims revealed the sex of the patient and the name of the attending physician. By linking to a medical provider database, they could determine the sex of the provider.

The goal was to look at outcomes across four dyads:

  • Male patient – male doctor
  • Male patient – female doctor
  • Female patient – male doctor
  • Female patient – female doctor

The primary outcome was 30-day mortality.

I told you that focusing on hospitalists produces some pseudorandomization, but let’s look at the data to be sure. Just under a million patients were treated by approximately 50,000 physicians, 30% of whom were female. And, though female patients and male patients differed, they did not differ with respect to the sex of their hospitalist. So, by physician sex, patients were similar in mean age, race, ethnicity, household income, eligibility for Medicaid, and comorbid conditions. The authors even created a “predicted mortality” score which was similar across the groups as well.

167829_photo1_web.jpg


Now, the female physicians were a bit different from the male physicians. The female hospitalists were slightly more likely to have an osteopathic degree, had slightly fewer admissions per year, and were a bit younger.

So, we have broadly similar patients regardless of who their hospitalist was, but hospitalists differ by factors other than their sex. Fine.

I’ve graphed the results here. Female patients had a significantly lower 30-day mortality rate than male patients, but they fared even better when cared for by female doctors compared with male doctors. There wasn’t a particularly strong influence of physician sex on outcomes for male patients. The secondary outcome, 30-day hospital readmission, showed a similar trend.

167829_photo2_web.jpg


This is a relatively small effect, to be sure, but if you multiply it across the millions of hospitalist admissions per year, you can start to put up some real numbers.

So, what is going on here? I see four broad buckets of possibilities.

Let’s start with the obvious explanation: Women, on average, are better doctors than men. I am married to a woman doctor, and based on my personal experience, this explanation is undoubtedly true. But why would that be?

The authors cite data that suggest that female physicians are less likely than male physicians to dismiss patient concerns — and in particular, the concerns of female patients — perhaps leading to fewer missed diagnoses. But this is impossible to measure with administrative data, so this study can no more tell us whether these female hospitalists are more attentive than their male counterparts than it can suggest that the benefit is mediated by the shorter average height of female physicians. Perhaps the key is being closer to the patient?

The second possibility here is that this has nothing to do with the sex of the physician at all; it has to do with those other things that associate with the sex of the physician. We know, for example, that the female physicians saw fewer patients per year than the male physicians, but the study authors adjusted for this in the statistical models. Still, other unmeasured factors (confounders) could be present. By the way, confounders wouldn’t necessarily change the primary finding — you are better off being cared for by female physicians. It’s just not because they are female; it’s a convenient marker for some other quality, such as age.

The third possibility is that the study represents a phenomenon called collider bias. The idea here is that physicians only get into the study if they are hospitalists, and the quality of physicians who choose to become a hospitalist may differ by sex. When deciding on a specialty, a talented resident considering certain lifestyle issues may find hospital medicine particularly attractive — and that draw toward a more lifestyle-friendly specialty may differ by sex, as some prior studies have shown. If true, the pool of women hospitalists may be better than their male counterparts because male physicians of that caliber don’t become hospitalists.

Okay, don’t write in. I’m just trying to cite examples of how to think about collider bias. I can’t prove that this is the case, and in fact the authors do a sensitivity analysis of all physicians, not just hospitalists, and show the same thing. So this is probably not true, but epidemiology is fun, right?

And the fourth possibility: This is nothing but statistical noise. The effect size is incredibly small and just on the border of statistical significance. Especially when you’re working with very large datasets like this, you’ve got to be really careful about overinterpreting statistically significant findings that are nevertheless of small magnitude.

Regardless, it’s an interesting study, one that made me think and, of course, worry a bit about how I would present it. Forgive me if I’ve been indelicate in handling the complex issues of sex, gender, and society here. But I’m not sure what you expect; after all, I’m only a male doctor.

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

 




‘Difficult Patient’: Stigmatizing Words and Medical Error

Article Type
Changed
Thu, 04/25/2024 - 12:14

This transcript has been edited for clarity.

When I was doing my nephrology training, I had an attending who would write notes that were, well, kind of funny. I remember one time we were seeing a patient whose first name was “Lucky.” He dryly opened his section of the consult note as follows: “This is a 56-year-old woman with an ironic name who presents with acute renal failure.”

As an exhausted renal fellow, I appreciated the bit of color amid the ongoing series of tragedies that was the consult service. But let’s be clear — writing like this in the medical record is not a good idea. It wasn’t a good idea then, when any record might end up disclosed during a malpractice suit, and it’s really not a good idea now, when patients have ready and automated access to all the notes we write about them.

And yet, worse language than that of my attending appears in hospital notes all the time; there is research about this. Specifically, I’m talking about language that does not have high clinical utility but telegraphs the biases of the person writing the note. This is known as “stigmatizing language” and it can be overt or subtle.

For example, a physician wrote “I listed several fictitious medication names and she reported she was taking them.”

This casts suspicions about the patient’s credibility, as does the more subtle statement, “he claims nicotine patches don’t work for him.” Stigmatizing language may cast the patient in a difficult light, like this note: “she persevered on the fact that ... ‘you wouldn’t understand.’ ”

This stuff creeps into our medical notes because doctors are human, not AI — at least not yet — and our frustrations and biases are real. But could those frustrations and biases lead to medical errors? Even deaths? Stay with me.

We are going to start by defining a very sick patient population: those admitted to the hospital and who, within 48 hours, have either been transferred to the intensive care unit or died. Because of the severity of illness in this population we’ve just defined, figuring out whether a diagnostic or other error was made would be extremely high yield; these can mean the difference between life and death.

In a letter appearing in JAMA Internal Medicine, researchers examined a group of more than 2300 patients just like this from 29 hospitals, scouring the medical records for evidence of these types of errors.

Nearly one in four (23.2%) had at least one diagnostic error, which could include a missed physical exam finding, failure to ask a key question on history taking, inadequate testing, and so on.

Understanding why we make these errors is clearly critical to improving care for these patients. The researchers hypothesized that stigmatizing language might lead to errors like this. For example, by demonstrating that you don’t find a patient credible, you may ignore statements that would help make a better diagnosis.

Just over 5% of these patients had evidence of stigmatizing language in their medical notes. Consistent with earlier studies, this language was more common if the patient was Black or had unstable housing.

Critically, stigmatizing language was more likely to be found among those who had diagnostic errors — a rate of 8.2% vs 4.1%. After adjustment for factors like race, the presence of stigmatizing language was associated with roughly a doubling of the risk for diagnostic errors.
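
As a rough sanity check on that “doubling,” here is a back-of-the-envelope sketch using hypothetical counts reconstructed to match the reported percentages; it is not the authors’ adjusted analysis.

```python
# Back-of-the-envelope odds ratio, using HYPOTHETICAL counts reconstructed only
# to be consistent with the reported percentages (~2,300 patients, 23.2% with a
# diagnostic error, stigmatizing language in 8.2% vs 4.1% of notes).
n_total = 2300
n_error = round(0.232 * n_total)           # ~534 patients with a diagnostic error
n_no_error = n_total - n_error             # ~1,766 without

stig_error = round(0.082 * n_error)        # ~44 error patients with stigmatizing notes
stig_no_error = round(0.041 * n_no_error)  # ~72 no-error patients with stigmatizing notes

# 2x2 table: stigmatizing language (rows) by diagnostic error (columns)
a = stig_error                   # language +, error +
b = stig_no_error                # language +, error -
c = n_error - stig_error         # language -, error +
d = n_no_error - stig_no_error   # language -, error -

odds_ratio = (a * d) / (b * c)
print(f"Unadjusted odds ratio: {odds_ratio:.2f}")   # ~2.1, i.e., roughly a doubling
```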

Now, I’m all for eliminating stigmatizing language from our medical notes. And, given the increased transparency of all medical notes these days, I expect that we’ll see less of this over time. But of course, the fact that a physician doesn’t write something that disparages the patient does not necessarily mean that they don’t retain that bias. That said, those comments have an effect on all the other team members who care for that patient as well; it sets a tone and can entrench an individual’s bias more broadly. We should strive to eliminate our biases when it comes to caring for patients. But perhaps the second best thing is to work to keep those biases to ourselves.
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.


A Banned Chemical That Is Still Causing Cancer

Article Type
Changed
Sun, 04/07/2024 - 23:58

This transcript has been edited for clarity.

I’m going to tell you about a chemical that might cause cancer — one I suspect you haven’t heard of before.

These types of stories usually end with a call for regulation — to ban said chemical or substance, or to regulate it — but in this case, that has already happened. This new carcinogen I’m telling you about is actually an old chemical. And it has not been manufactured or legally imported in the US since 2013.

So, why bother? Because in this case, the chemical — or, really, a group of chemicals called polybrominated diphenyl ethers (PBDEs) — are still around: in our soil, in our food, and in our blood.

PBDEs are a group of compounds that confer flame-retardant properties to plastics, and they were used extensively in the latter part of the 20th century in electronic enclosures, business equipment, and foam cushioning in upholstery.

But there was a problem. They don’t chemically bond to plastics; they are just sort of mixed in, which means they can leach out. They are hydrophobic, meaning they don’t get washed out of soil, and, when ingested or inhaled by humans, they dissolve in our fat stores, making it difficult for our normal excretory systems to excrete them.

PBDEs biomagnify. Small animals can take them up from contaminated soil or water, and those animals are eaten by larger animals, which accumulate higher concentrations of the chemicals. This bioaccumulation increases as you move up the food web until you get to an apex predator — like you and me.

This is true of lots of chemicals, of course. The concern arises when these chemicals are toxic. To date, the toxicity data for PBDEs were pretty limited. There were some animal studies where rats were exposed to extremely high doses and they developed liver lesions — but I am always very wary of extrapolating high-dose rat toxicity studies to humans. There was also some suggestion that the chemicals could be endocrine disruptors, affecting breast and thyroid tissue.

What about cancer? In 2016, the International Agency for Research on Cancer concluded there was “inadequate evidence in humans for the carcinogencity of” PBDEs.

In the same report, though, they suggested PBDEs are “probably carcinogenic to humans” based on mechanistic studies.

In other words, we can’t prove they’re cancerous — but come on, they probably are.

Finally, we have some evidence that really pushes us toward the carcinogenic conclusion, in the form of this study, appearing in JAMA Network Open. It’s a nice bit of epidemiology leveraging the population-based National Health and Nutrition Examination Survey (NHANES).

Researchers measured PBDE levels in blood samples from 1100 people enrolled in NHANES in 2003 and 2004 and linked them to death records collected over the next 20 years or so.

The first thing to note is that the researchers were able to measure PBDEs in the blood samples. They were in there. They were detectable. And they were variable. Dividing the 1100 participants into low, medium, and high PBDE tertiles, you can see a nearly 10-fold difference across the population.

Importantly, not many baseline variables correlated with PBDE levels. People in the highest group were a bit younger but had a fairly similar sex distribution, race, ethnicity, education, income, physical activity, smoking status, and body mass index.

This is not a randomized trial, of course — but at least based on these data, exposure levels do seem fairly random, which is what you would expect from an environmental toxin that percolates up through the food chain. They are often somewhat indiscriminate.

This similarity in baseline characteristics between people with low or high blood levels of PBDE also allows us to make some stronger inferences about the observed outcomes. Let’s take a look at them.

After adjustment for baseline factors, individuals in the highest PBDE group had a 43% higher rate of death from any cause over the follow-up period. This was not enough to achieve statistical significance, but it was close.

167538_photo.jpg


But the key finding is deaths due to cancer. After adjustment, cancer deaths occurred four times as frequently among those in the high PBDE group, and that is a statistically significant difference.

To be fair, cancer deaths were rare in this cohort. The vast majority of people did not die of anything during the follow-up period regardless of PBDE level. But the data are strongly suggestive of the carcinogenicity of these chemicals.

I should also point out that the researchers are linking the PBDE level at a single time point to all these future events. If PBDE levels remain relatively stable within an individual over time, that’s fine, but if they tend to vary with intake of different foods for example, this would not be captured and would actually lead to an underestimation of the cancer risk.

The researchers also didn’t have granular enough data to determine the type of cancer, but they do show that rates are similar between men and women, which might point away from the more sex-specific cancer etiologies. Clearly, some more work is needed.

Of course, I started this piece by telling you that these chemicals are already pretty much banned in the United States. What are we supposed to do about these findings? Studies have examined the primary ongoing sources of PBDE in our environment and it seems like most of our exposure will be coming from the food we eat due to that biomagnification thing: high-fat fish, meat and dairy products, and fish oil supplements. It may be worth some investigation into the relative adulteration of these products with this new old carcinogen.
 

Dr. F. Perry Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

This transcript has been edited for clarity.

I’m going to tell you about a chemical that might cause cancer — one I suspect you haven’t heard of before.

These types of stories usually end with a call for regulation — to ban said chemical or substance, or to regulate it — but in this case, that has already happened. This new carcinogen I’m telling you about is actually an old chemical. And it has not been manufactured or legally imported in the US since 2013.

So, why bother? Because in this case, the chemical — or, really, a group of chemicals called polybrominated diphenyl ethers (PBDEs) — are still around: in our soil, in our food, and in our blood.

PBDEs are a group of compounds that confer flame-retardant properties to plastics, and they were used extensively in the latter part of the 20th century in electronic enclosures, business equipment, and foam cushioning in upholstery.

But there was a problem. They don’t chemically bond to plastics; they are just sort of mixed in, which means they can leach out. They are hydrophobic, meaning they don’t get washed out of soil, and, when ingested or inhaled by humans, they dissolve in our fat stores, making it difficult for our normal excretory systems to excrete them.

PBDEs biomagnify. Small animals can take them up from contaminated soil or water, and those animals are eaten by larger animals, which accumulate higher concentrations of the chemicals. This bioaccumulation increases as you move up the food web until you get to an apex predator — like you and me.

This is true of lots of chemicals, of course. The concern arises when these chemicals are toxic. To date, the toxicity data for PBDEs were pretty limited. There were some animal studies where rats were exposed to extremely high doses and they developed liver lesions — but I am always very wary of extrapolating high-dose rat toxicity studies to humans. There was also some suggestion that the chemicals could be endocrine disruptors, affecting breast and thyroid tissue.

What about cancer? In 2016, the International Agency for Research on Cancer concluded there was “inadequate evidence in humans for the carcinogencity of” PBDEs.

In the same report, though, they suggested PBDEs are “probably carcinogenic to humans” based on mechanistic studies.

In other words, we can’t prove they’re cancerous — but come on, they probably are.

Finally, we have some evidence that really pushes us toward the carcinogenic conclusion, in the form of this study, appearing in JAMA Network Open. It’s a nice bit of epidemiology leveraging the population-based National Health and Nutrition Examination Survey (NHANES).

Researchers measured PBDE levels in blood samples from 1100 people enrolled in NHANES in 2003 and 2004 and linked them to death records collected over the next 20 years or so.

The first thing to note is that the researchers were able to measure PBDEs in the blood samples. They were in there. They were detectable. And they were variable. Dividing the 1100 participants into low, medium, and high PBDE tertiles, you can see a nearly 10-fold difference across the population.

Importantly, not many baseline variables correlated with PBDE levels. People in the highest group were a bit younger but had a fairly similar sex distribution, race, ethnicity, education, income, physical activity, smoking status, and body mass index.

This is not a randomized trial, of course — but at least based on these data, exposure levels do seem fairly random, which is what you would expect from an environmental toxin that percolates up through the food chain. They are often somewhat indiscriminate.

This similarity in baseline characteristics between people with low or high blood levels of PBDE also allows us to make some stronger inferences about the observed outcomes. Let’s take a look at them.

After adjustment for baseline factors, individuals in the highest PBDE group had a 43% higher rate of death from any cause over the follow-up period. This was not enough to achieve statistical significance, but it was close.

167538_photo.jpg


But the key finding is deaths due to cancer. After adjustment, cancer deaths occurred four times as frequently among those in the high PBDE group, and that is a statistically significant difference.

To be fair, cancer deaths were rare in this cohort. The vast majority of people did not die of anything during the follow-up period regardless of PBDE level. But the data are strongly suggestive of the carcinogenicity of these chemicals.

I should also point out that the researchers are linking the PBDE level at a single time point to all these future events. If PBDE levels remain relatively stable within an individual over time, that’s fine; but if they tend to vary, with intake of different foods for example, a single measurement would not capture that, and the result would actually be an underestimation of the cancer risk.
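To see why single-time-point measurement pushes an association toward the null rather than away from it, here is a toy simulation, entirely my own and not from the paper, of the classic regression-dilution effect: the more a measured exposure bounces around its long-term average, the smaller the estimated association gets. The numbers and the use of statsmodels are my own choices for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200_000

# "True" long-term exposure burden, and an outcome that depends on it
true_exposure = rng.normal(0.0, 1.0, n)
p_death = 1 / (1 + np.exp(-(-4.0 + 0.8 * true_exposure)))   # true log-odds slope = 0.8
died = (rng.random(n) < p_death).astype(int)

def fitted_slope(exposure):
    """Logistic-regression slope of the outcome on the (possibly noisy) exposure."""
    X = sm.add_constant(exposure)
    return sm.Logit(died, X).fit(disp=0).params[1]

# Exposure measured once, with within-person variability (diet-driven swings, say)
noisy_exposure = true_exposure + rng.normal(0.0, 1.0, n)

print(f"slope with true exposure:  {fitted_slope(true_exposure):.2f}")   # ~0.8
print(f"slope with one noisy draw: {fitted_slope(noisy_exposure):.2f}")  # attenuated toward 0
```

The second slope is biased toward zero even though the underlying relationship is unchanged, which is the sense in which a single noisy measurement understates the risk.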

The researchers also didn’t have granular enough data to determine the type of cancer, but they do show that rates are similar between men and women, which might point away from the more sex-specific cancer etiologies. Clearly, some more work is needed.

Of course, I started this piece by telling you that these chemicals are already pretty much banned in the United States. What are we supposed to do about these findings? Studies have examined the primary ongoing sources of PBDE in our environment and it seems like most of our exposure will be coming from the food we eat due to that biomagnification thing: high-fat fish, meat and dairy products, and fish oil supplements. It may be worth some investigation into the relative adulteration of these products with this new old carcinogen.
 

Dr. F. Perry Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.




Vitamin D Supplements May Be a Double-Edged Sword

Article Type
Changed
Tue, 03/19/2024 - 13:41

This transcript has been edited for clarity.

Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I’m Dr F. Perry Wilson of the Yale School of Medicine.

Imagine, if you will, the great Cathedral of Our Lady of Correlation. You walk through the majestic oak doors depicting the link between ice cream sales and shark attacks, past the rose window depicting the cardiovascular benefits of red wine, and down the aisles frescoed in dramatic images showing how Facebook usage is associated with less life satisfaction. And then you reach the altar, the holy of holies where, emblazoned in shimmering pyrite, you see the patron saint of this church: vitamin D.

Yes, if you’ve watched this space, then you know that I have little truck with the wildly popular supplement. In all of clinical research, I believe that there is no molecule with stronger data for correlation and weaker data for causation.

Low serum vitamin D levels have been linked to higher risks for heart disease, cancer, falls, COVID, dementia, C diff, and others. And yet, when we do randomized trials of vitamin D supplementation — the thing that can prove that the low level was causally linked to the outcome of interest — we get negative results.

[Figure (credit: F. Perry Wilson, MD, MSCE)]


Trials aren’t perfect, of course, and we’ll talk in a moment about a big one that had some issues. But we are at a point where we need to either be vitamin D apologists, saying, “Forget what those lying RCTs tell you and buy this supplement” — an $800 million-a-year industry, by the way — or conclude that vitamin D levels are a convenient marker of various lifestyle factors that are associated with better outcomes: markers of exercise, getting outside, eating a varied diet.

Or perhaps vitamin D supplements have real effects. It’s just that the beneficial effects are matched by the harmful ones. Stay tuned.

The Women’s Health Initiative remains among the largest randomized trials of vitamin D and calcium supplementation ever conducted — and a major contributor to the negative results of vitamin D trials.

But if you dig into the inclusion and exclusion criteria for this trial, you’ll find that individuals were allowed to continue taking vitamins and supplements while they were in the trial, regardless of their randomization status. In fact, the majority took supplements at baseline, and more took supplements over time.

[Figure (credit: Annals of Internal Medicine)]


That means, of course, that people in the placebo group, who were getting sugar pills instead of vitamin D and calcium, may have been taking vitamin D and calcium on the side. That would certainly bias the results of the trial toward the null, which is what the primary analyses showed. To wit, the original analysis of the Women’s Health Initiative trial showed no effect of randomization to vitamin D supplementation on improving cancer or cardiovascular outcomes.

But the Women’s Health Initiative trial started 30 years ago. Today, with the benefit of decades of follow-up, we can re-investigate — and perhaps re-litigate — those findings, courtesy of this study, “Long-Term Effect of Randomization to Calcium and Vitamin D Supplementation on Health in Older Women” appearing in Annals of Internal Medicine.

Dr Cynthia Thomson, of the Mel and Enid Zuckerman College of Public Health at the University of Arizona, and colleagues led this updated analysis focused on two findings that had been hinted at, but not statistically confirmed, in other vitamin D studies: a potential for the supplement to reduce the risk for cancer, and a potential for it to increase the risk for heart disease.

The randomized trial itself only lasted 7 years. What we are seeing in this analysis of 36,282 women is outcomes that happened at any time from randomization to the end of 2023 — around 20 years after the randomization to supplementation stopped. But, the researchers would argue, that’s probably okay. Cancer and heart disease take time to develop; we see lung cancer long after people stop smoking. So a history of consistent vitamin D supplementation may indeed be protective — or harmful.

Here are the top-line results. Those randomized to vitamin D and calcium supplementation had a 7% reduction in the rate of death from cancer, driven primarily by a reduction in colorectal cancer. This was statistically significant. Also statistically significant? Those randomized to supplementation had a 6% increase in the rate of death from cardiovascular disease. Put those findings together and what do you get? Stone-cold nothing, in terms of overall mortality.
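A quick back-of-the-envelope calculation shows how a benefit on one cause of death and a harm on another can wash out in all-cause mortality: to a first approximation, the all-cause rate ratio is just the share-weighted average of the cause-specific rate ratios. The cause-of-death shares below are made up for illustration; only the 0.93 and 1.06 come from the results described above.

```python
# Hypothetical shares of deaths by cause (not trial data) and the cause-specific
# rate ratios mentioned above: 0.93 for cancer death, 1.06 for cardiovascular death.
cause_share = {"cancer": 0.25, "cardiovascular": 0.35, "other": 0.40}
rate_ratio  = {"cancer": 0.93, "cardiovascular": 1.06, "other": 1.00}

# All-cause effect is roughly the share-weighted average of cause-specific effects
overall = sum(cause_share[c] * rate_ratio[c] for c in cause_share)
print(f"Approximate all-cause rate ratio: {overall:.3f}")   # ~1.00 -> "stone-cold nothing"
```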

[Figure (credit: Annals of Internal Medicine)]


Okay, you say, but what about all that supplementation that was happening outside of the context of the trial, biasing our results toward the null?

The researchers finally clue us in.

First of all, I’ll tell you that, yes, people who were supplementing outside of the trial had higher baseline vitamin D levels — a median of 54.5 nmol/L vs 32.8 nmol/L. This may be because they were supplementing with vitamin D, but it could also be because people who take supplements tend to do other healthy things — another correlation to add to the great cathedral.

To get a better view of the real effects of randomization, the authors restricted the analysis to just those who did not use outside supplements. If vitamin D supplements help, then these are the people they should help. This group had about an 11% reduction in the incidence of cancer — statistically significant — and a 7% reduction in cancer mortality that did not meet the bar for statistical significance.

[Figure (credit: Annals of Internal Medicine)]


There was no increase in cardiovascular disease among this group. But this small effect on cancer was nowhere near enough to significantly reduce the rate of all-cause mortality.

[Figure (credit: Annals of Internal Medicine)]


Among those using supplements, vitamin D supplementation didn’t really move the needle on any outcome.

I know what you’re thinking: How many of these women were vitamin D deficient when we got started? These results may simply be telling us that people who have normal vitamin D levels are fine to go without supplementation.

Nearly three fourths of women who were not taking supplements entered the trial with vitamin D levels below the 50 nmol/L cutoff that the authors suggest would qualify for deficiency. Around half of those who used supplements were deficient. And yet, frustratingly, I could not find data on the effect of randomization to supplementation stratified by baseline vitamin D level. I even reached out to Dr Thomson to ask about this. She replied, “We did not stratify on baseline values because the numbers are too small statistically to test this.” Sorry.

In the meantime, I can tell you that for your “average woman,” vitamin D supplementation likely has no effect on mortality. It might modestly reduce the risk for certain cancers while increasing the risk for heart disease (probably through coronary calcification). So, there might be some room for personalization here. Perhaps women with a strong family history of cancer or other risk factors would do better with supplements, and those with a high risk for heart disease would do worse. Seems like a strategy that could be tested in a clinical trial. But maybe we could ask the participants to give up their extracurricular supplement use before they enter the trial.

F. Perry Wilson, MD, MSCE, has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.
 

F. Perry Wilson, MD, MSCE, is an associate professor of medicine and public health and director of Yale’s Clinical and Translational Research Accelerator. His science communication work can be found in the Huffington Post, on NPR, and here on Medscape. He tweets @fperrywilson and his book, How Medicine Works and When It Doesn’t, is available now.




Publications
Publications
Topics
Article Type
Sections
Teambase XML
<?xml version="1.0" encoding="UTF-8"?>
<!--$RCSfile: InCopy_agile.xsl,v $ $Revision: 1.35 $-->
<!--$RCSfile: drupal.xsl,v $ $Revision: 1.7 $-->
<root generator="drupal.xsl" gversion="1.7"> <header> <fileName>167279</fileName> <TBEID>0C04F002.SIG</TBEID> <TBUniqueIdentifier>MD_0C04F002</TBUniqueIdentifier> <newsOrJournal>News</newsOrJournal> <publisherName>Frontline Medical Communications</publisherName> <storyname/> <articleType>353</articleType> <TBLocation>QC Done-All Pubs</TBLocation> <QCDate>20240314T121544</QCDate> <firstPublished>20240314T122244</firstPublished> <LastPublished>20240314T122244</LastPublished> <pubStatus qcode="stat:"/> <embargoDate/> <killDate/> <CMSDate>20240314T122244</CMSDate> <articleSource/> <facebookInfo/> <meetingNumber/> <byline>F. Perry Wilson, MD</byline> <bylineText>F. PERRY WILSON, MD, MSCE</bylineText> <bylineFull>F. PERRY WILSON, MD, MSCE</bylineFull> <bylineTitleText/> <USOrGlobal/> <wireDocType/> <newsDocType/> <journalDocType/> <linkLabel/> <pageRange/> <citation/> <quizID/> <indexIssueDate/> <itemClass qcode="ninat:text"/> <provider qcode="provider:imng"> <name>IMNG Medical Media</name> <rightsInfo> <copyrightHolder> <name>Frontline Medical News</name> </copyrightHolder> <copyrightNotice>Copyright (c) 2015 Frontline Medical News, a Frontline Medical Communications Inc. company. All rights reserved. This material may not be published, broadcast, copied, or otherwise reproduced or distributed without the prior written permission of Frontline Medical Communications Inc.</copyrightNotice> </rightsInfo> </provider> <abstract/> <metaDescription>Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I’m Dr F. Perry Wilson of the Yale School of Medicine.</metaDescription> <articlePDF/> <teaserImage>300737</teaserImage> <teaser>I can tell you that for your “average woman,” vitamin D supplementation likely has no effect on mortality.</teaser> <title>Vitamin D Supplements May Be a Double-Edged Sword</title> <deck/> <disclaimer/> <AuthorList/> <articleURL/> <doi/> <pubMedID/> <publishXMLStatus/> <publishXMLVersion>1</publishXMLVersion> <useEISSN>0</useEISSN> <urgency/> <pubPubdateYear/> <pubPubdateMonth/> <pubPubdateDay/> <pubVolume/> <pubNumber/> <wireChannels/> <primaryCMSID/> <CMSIDs/> <keywords/> <seeAlsos/> <publications_g> <publicationData> <publicationCode>card</publicationCode> <pubIssueName/> <pubArticleType/> <pubTopics/> <pubCategories/> <pubSections/> </publicationData> <publicationData> <publicationCode>endo</publicationCode> <pubIssueName/> <pubArticleType/> <pubTopics/> <pubCategories/> <pubSections/> </publicationData> <publicationData> <publicationCode>fp</publicationCode> <pubIssueName/> <pubArticleType/> <pubTopics/> <pubCategories/> <pubSections/> </publicationData> <publicationData> <publicationCode>im</publicationCode> <pubIssueName/> <pubArticleType/> <pubTopics/> <pubCategories/> <pubSections/> </publicationData> <publicationData> <publicationCode>ob</publicationCode> <pubIssueName/> <pubArticleType/> <pubTopics/> <pubCategories/> <pubSections/> </publicationData> </publications_g> <publications> <term>5</term> <term>34</term> <term canonical="true">15</term> <term>21</term> <term>23</term> </publications> <sections> <term canonical="true">52</term> <term>41022</term> </sections> <topics> <term>193</term> <term>266</term> <term>194</term> <term>263</term> <term>215</term> <term canonical="true">322</term> </topics> <links> <link> <itemClass qcode="ninat:picture"/> <altRep contenttype="image/jpeg">images/2401273a.jpg</altRep> <description role="drol:caption"/> <description role="drol:credit">F. 
Perry Wilson, MD, MSCE</description> </link> <link> <itemClass qcode="ninat:picture"/> <altRep contenttype="image/jpeg">images/2401273b.jpg</altRep> <description role="drol:caption"/> <description role="drol:credit">Annals of Internal Medicine</description> </link> <link> <itemClass qcode="ninat:picture"/> <altRep contenttype="image/jpeg">images/2401273c.jpg</altRep> <description role="drol:caption"/> <description role="drol:credit">Annals of Internal Medicine</description> </link> <link> <itemClass qcode="ninat:picture"/> <altRep contenttype="image/jpeg">images/2401273d.jpg</altRep> <description role="drol:caption"/> <description role="drol:credit">Annals of Internal Medicine</description> </link> <link> <itemClass qcode="ninat:picture"/> <altRep contenttype="image/jpeg">images/2401273e.jpg</altRep> <description role="drol:caption"/> <description role="drol:credit">Annals of Internal Medicine</description> </link> </links> </header> <itemSet> <newsItem> <itemMeta> <itemRole>Main</itemRole> <itemClass>text</itemClass> <title>Vitamin D Supplements May Be a Double-Edged Sword</title> <deck/> </itemMeta> <itemContent> <p> <em>This transcript has been edited for clarity.</em> </p> <p>Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I’m Dr F. Perry Wilson of the Yale School of Medicine.</p> <p>Imagine, if you will, the great Cathedral of Our Lady of Correlation. You walk through the majestic oak doors depicting the link between ice cream sales and shark attacks, past the rose window depicting the cardiovascular benefits of red wine, and down the aisles frescoed in dramatic images showing how Facebook usage is associated with less life satisfaction. And then you reach the altar, the holy of holies where, emblazoned in shimmering pyrite, you see the patron saint of this church: <a href="https://reference.medscape.com/drug/drisdol-calciferol-vitamind-344417">vitamin D</a>.<br/><br/>Yes, if you’ve watched this space, then you know that I have <a href="https://www.medscape.com/viewarticle/939759">little truck with the wildly popular supplement</a>. In all of clinical research, I believe that there is no molecule with stronger data for correlation and weaker data for causation.<br/><br/>Low serum vitamin D levels have been linked to higher risks for heart disease, cancer, falls, COVID, dementia, <a href="https://emedicine.medscape.com/article/186458-overview">C diff</a>, and others. And yet, when we do randomized trials of vitamin D supplementation — the thing that can prove that the low level was causally linked to the outcome of interest — we get negative results.<br/><br/>[[{"fid":"300737","view_mode":"medstat_image_full_text","fields":{"format":"medstat_image_full_text","field_file_image_alt_text[und][0][value]":"","field_file_image_credit[und][0][value]":"F. Perry Wilson, MD, MSCE","field_file_image_caption[und][0][value]":""},"type":"media","attributes":{"class":"media-element file-medstat_image_full_text"}}]]<br/><br/>Trials aren’t perfect, of course, and we’ll talk in a moment about a big one that had some issues. 
But we are at a point where we need to either be vitamin D apologists, saying, “Forget what those lying RCTs tell you and buy this supplement” — <span class="Hyperlink"><a href="https://www.polarismarketresearch.com/industry-analysis/vitamin-d-market">an $800 million-a-year industry, by the way</a></span> — or conclude that vitamin D levels are a convenient marker of various lifestyle factors that are associated with better outcomes: markers of exercise, getting outside, eating a varied diet.<br/><br/>Or perhaps vitamin D supplements have real effects. It’s just that the beneficial effects are matched by the harmful ones. Stay tuned.<br/><br/>The <span class="Hyperlink"><a href="https://sp.whi.org/about/SitePages/Calcium%20and%20Vitamin%20D.aspx">Women’s Health Initiative</a></span> remains among the largest randomized trials of vitamin D and calcium supplementation ever conducted — and a major contributor to the negative outcomes of vitamin D trials.<br/><br/>But if you dig into the inclusion and exclusion criteria for this trial, you’ll find that individuals were allowed to continue taking vitamins and supplements while they were in the trial, regardless of their randomization status. In fact, the majority took supplements at baseline, and more took supplements over time.<br/><br/>[[{"fid":"300738","view_mode":"medstat_image_full_text","fields":{"format":"medstat_image_full_text","field_file_image_alt_text[und][0][value]":"","field_file_image_credit[und][0][value]":"Annals of Internal Medicine","field_file_image_caption[und][0][value]":""},"type":"media","attributes":{"class":"media-element file-medstat_image_full_text"}}]]<br/><br/>That means, of course, that people in the placebo group, who were getting sugar pills instead of vitamin D and calcium, may have been taking vitamin D and calcium on the side. That would certainly bias the results of the trial toward the null, which is what the primary analyses showed. To wit, the original analysis of the Women’s Health Initiative trial showed no effect of randomization to vitamin D supplementation on improving cancer or cardiovascular outcomes.<br/><br/>But the Women’s Health Initiative trial started 30 years ago. Today, with the benefit of decades of follow-up, we can re-investigate — and perhaps re-litigate — those findings, courtesy of <span class="Hyperlink"><a href="https://www.acpjournals.org/doi/10.7326/M23-2598">this study</a></span>, “Long-Term Effect of Randomization to Calcium and Vitamin D Supplementation on Health in Older Women” appearing in <span class="Emphasis">Annals of Internal Medicine</span>.<br/><br/>Dr Cynthia Thomson, of the Mel and Enid Zuckerman College of Public Health at the University of Arizona, and colleagues led this updated analysis focused on two findings that had been hinted at, but not statistically confirmed, in other vitamin D studies: a potential for the supplement to reduce the risk for cancer, and a potential for it to increase the risk for heart disease.<br/><br/>The randomized trial itself only lasted 7 years. What we are seeing in this analysis of 36,282 women is outcomes that happened at any time from randomization to the end of 2023 — around 20 years after the randomization to supplementation stopped. But, the researchers would argue, that’s probably okay. Cancer and heart disease take time to develop; we see lung cancer long after people stop smoking. So a history of consistent vitamin D supplementation may indeed be protective — or harmful.<br/><br/>Here are the top-line results. 
Those randomized to vitamin D and calcium supplementation had a 7% reduction in the rate of death from cancer, driven primarily by a reduction in <span class="Hyperlink"><a href="https://emedicine.medscape.com/article/2500006-overview">colorectal cancer</a></span>. This was statistically significant. Also statistically significant? Those randomized to supplementation had a 6% increase in the rate of death from cardiovascular disease. Put those findings together and what do you get? Stone-cold nothing, in terms of overall mortality.<br/><br/>[[{"fid":"300739","view_mode":"medstat_image_full_text","fields":{"format":"medstat_image_full_text","field_file_image_alt_text[und][0][value]":"","field_file_image_credit[und][0][value]":"Annals of Internal Medicine","field_file_image_caption[und][0][value]":""},"type":"media","attributes":{"class":"media-element file-medstat_image_full_text"}}]]<br/><br/>Okay, you say, but what about all that supplementation that was happening outside of the context of the trial, biasing our results toward the null?<br/><br/>The researchers finally clue us in.<br/><br/>First of all, I’ll tell you that, yes, people who were supplementing outside of the trial had higher baseline vitamin D levels — a median of 54.5 nmol/L vs 32.8 nmol/L. This may be because they were supplementing with vitamin D, but it could also be because people who take supplements tend to do other healthy things — another correlation to add to the great cathedral.<br/><br/>To get a better view of the real effects of randomization, the authors restricted the analysis to just those who did not use outside supplements. If vitamin D supplements help, then these are the people they should help. This group had about a 11% reduction in the incidence of cancer — statistically significant — and a 7% reduction in cancer mortality that did not meet the bar for statistical significance.<br/><br/>[[{"fid":"300740","view_mode":"medstat_image_full_text","fields":{"format":"medstat_image_full_text","field_file_image_alt_text[und][0][value]":"","field_file_image_credit[und][0][value]":"Annals of Internal Medicine","field_file_image_caption[und][0][value]":""},"type":"media","attributes":{"class":"media-element file-medstat_image_full_text"}}]]<br/><br/>There was no increase in cardiovascular disease among this group. But this small effect on cancer was nowhere near enough to significantly reduce the rate of all-cause mortality.<br/><br/>[[{"fid":"300741","view_mode":"medstat_image_full_text","fields":{"format":"medstat_image_full_text","field_file_image_alt_text[und][0][value]":"","field_file_image_credit[und][0][value]":"Annals of Internal Medicine","field_file_image_caption[und][0][value]":""},"type":"media","attributes":{"class":"media-element file-medstat_image_full_text"}}]]<br/><br/>Among those using supplements, vitamin D supplementation didn’t really move the needle on any outcome.<br/><br/>I know what you’re thinking: How many of these women were vitamin D deficient when we got started? These results may simply be telling us that people who have normal vitamin D levels are fine to go without supplementation.<br/><br/>Nearly three fourths of women who were not taking supplements entered the trial with vitamin D levels below the 50 nmol/L cutoff that the authors suggest would qualify for deficiency. Around half of those who used supplements were deficient. And yet, frustratingly, I could not find data on the effect of randomization to supplementation stratified by baseline vitamin D level. 
I even reached out to Dr Thomson to ask about this. She replied, “We did not stratify on baseline values because the numbers are too small statistically to test this.” Sorry.

In the meantime, I can tell you that for your “average woman,” vitamin D supplementation likely has no effect on mortality. It might modestly reduce the risk for certain cancers while increasing the risk for heart disease (probably through coronary calcification). So, there might be some room for personalization here. Perhaps women with a strong family history of cancer or other risk factors would do better with supplements, and those with a high risk for heart disease would do worse. Seems like a strategy that could be tested in a clinical trial. But maybe we could ask the participants to give up their extracurricular supplement use before they enter the trial.

F. Perry Wilson, MD, MSCE, has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

F. Perry Wilson, MD, MSCE, is an associate professor of medicine and public health and director of Yale’s Clinical and Translational Research Accelerator. His science communication work can be found in the Huffington Post, on NPR, and here on Medscape. He tweets @fperrywilson and his book, How Medicine Works and When It Doesn’t, is available now.

COVID-19 Is a Very Weird Virus

Article Type
Changed
Tue, 03/12/2024 - 17:24

This transcript has been edited for clarity.

Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I’m Dr F. Perry Wilson of the Yale School of Medicine.

In the early days of the pandemic, before we really understood what COVID was, two specialties in the hospital had a foreboding sense that something was very strange about this virus. The first was the pulmonologists, who noticed the striking levels of hypoxemia — low oxygen in the blood — and the rapidity with which patients who had previously been stable would crash in the intensive care unit.

The second, and I mark myself among this group, were the nephrologists. The dialysis machines stopped working right. I remember rounding on patients in the hospital who were on dialysis for kidney failure in the setting of severe COVID infection and seeing clots forming on the dialysis filters. Some patients could barely get in a full treatment because the filters would clog so quickly.

We knew it was worse than flu because of the mortality rates, but these oddities made us realize that it was different too — not just a particularly nasty respiratory virus but one that had effects on the body that we hadn’t really seen before.

[Figure: Percentage of US deaths among those 65 and older, flu vs COVID-19 (Centers for Disease Control and Prevention)]


That’s why I’ve always been interested in studies that compare what happens to patients after COVID infection vs what happens to patients after other respiratory infections. This week, we’ll look at an intriguing study that suggests that COVID may lead to autoimmune diseases like rheumatoid arthritis, lupus, and vasculitis.

The study appears in the Annals of Internal Medicine and is made possible by the universal electronic health record systems of South Korea and Japan, which collaborated to create a truly staggering cohort of more than 20 million individuals living in those countries from 2020 to 2021.

The exposure of interest? COVID infection, experienced by just under 5% of that cohort over the study period. (Remember, there was a time when COVID infections were relatively controlled, particularly in some countries.)

[Figure: Total coronavirus cases in South Korea (Worldometer)]


The researchers wanted to compare the risk for autoimmune disease among COVID-infected individuals against two control groups. The first control group was the general population. This is interesting but a difficult analysis, because people who become infected with COVID might be very different from the general population. The second control group was people infected with influenza. I like this a lot better; the risk factors for COVID and influenza are quite similar, and the fact that this group was diagnosed with flu means at least that they are getting medical care and are sort of “in the system,” so to speak.

But it’s not enough to simply identify these folks and see who ends up with more autoimmune disease. The authors used propensity score matching to pair individuals infected with COVID with individuals from the control groups who were very similar to them. I’ve talked about this strategy before, but the basic idea is that you build a model predicting the likelihood of infection with COVID, based on a slew of factors — and the slew these authors used is pretty big, as shown below — and then stick people with similar risk for COVID together, with one member of the pair having had COVID and the other having eluded it (at least for the study period).

[Figure: Propensity score matching (Dr. Wilson)]
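
To make the mechanics concrete, here is a minimal sketch of 1:1 nearest-neighbor propensity score matching in Python. Everything in it is simulated and invented for illustration; the variables, coefficients, and sample size are assumptions of mine and have nothing to do with the authors' actual model or the Korean and Japanese data.

    # Minimal sketch of propensity score matching on simulated data.
    # Purely illustrative: variables and numbers are invented, not the study's.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    n = 10_000

    # Simulated confounders and a COVID "exposure" whose probability depends on them.
    age = rng.normal(50, 15, n)
    comorbidity = rng.poisson(1.5, n)
    logit = -4 + 0.03 * age + 0.3 * comorbidity
    covid = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    # Step 1: model the probability of exposure from the confounders.
    X = np.column_stack([age, comorbidity])
    ps = LogisticRegression().fit(X, covid).predict_proba(X)[:, 1]

    # Step 2: pair each exposed person with the unexposed person whose
    # propensity score is closest (1:1 nearest-neighbor matching).
    exposed = np.where(covid == 1)[0]
    unexposed = np.where(covid == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[unexposed].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[exposed].reshape(-1, 1))
    matched_controls = unexposed[idx.ravel()]

    # After matching, measured confounders should look similar across groups.
    print("Mean age, exposed vs matched controls:",
          round(age[exposed].mean(), 1), round(age[matched_controls].mean(), 1))

The only point of the sketch is that, after matching on the score, the measured confounders end up looking similar in the two groups, which is what lets the authors compare like with like.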


After this statistical balancing, the authors looked at the risk for a variety of autoimmune diseases.

Compared with those infected with flu, those infected with COVID were more likely to be diagnosed with any autoimmune condition, connective tissue disease, and, in Japan at least, inflammatory arthritis.

The authors acknowledge that being diagnosed with a disease might not be the same as actually having the disease, so in another analysis they looked only at people who received treatment for the autoimmune conditions, and the signals were even stronger in that group.

This risk seemed to be highest in the 6 months following the COVID infection, which makes sense biologically if we think that the infection is somehow screwing up the immune system.

And the risk was similar with both COVID variants circulating at the time of the study.

The only factor that reduced the risk? You guessed it: vaccination. This is a particularly interesting finding because the exposure cohort was defined by having been infected with COVID. Therefore, the mechanism of protection is not prevention of infection; it’s something else. Perhaps vaccination helps to get the immune system in a state to respond to COVID infection more… appropriately?

Yes, this study is observational. We can’t draw causal conclusions here. But it does reinforce my long-held belief that COVID is a weird virus, one with effects that are different from the respiratory viruses we are used to. I can’t say for certain whether COVID causes immune system dysfunction that puts someone at risk for autoimmunity — not from this study. But I can say it wouldn’t surprise me.

Dr. F. Perry Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.


It Sure Looks Like Cannabis Is Bad for the Heart, Doesn’t It?

Article Type
Changed
Tue, 03/12/2024 - 17:24

This transcript has been edited for clarity.

If you’re an epidemiologist trying to explore whether some exposure is a risk factor for a disease, you can run into a tough problem when your exposure of interest is highly correlated with another risk factor for the disease. For decades, this stymied investigations into the link, if any, between marijuana use and cardiovascular disease, because most people who used marijuana in some way also smoked cigarettes — which is a very clear risk factor for heart disease.
 

But the times they are a-changing.

Thanks to the legalization of marijuana for recreational use in many states, and even broader social trends, there is now a large population of people who use marijuana but do not use cigarettes. That means we can start to determine whether marijuana use is an independent risk factor for heart disease.

And this week, we have the largest study yet to attempt to answer that question, though, as I’ll explain momentarily, the smoke hasn’t entirely cleared yet.

The centerpiece of the study we are discussing this week, “Association of Cannabis Use With Cardiovascular Outcomes Among US Adults,” which appeared in the Journal of the American Heart Association, is the Behavioral Risk Factor Surveillance System, an annual telephone survey conducted by the Centers for Disease Control and Prevention since 1984 that gathers data on all sorts of stuff that we do to ourselves: our drinking habits, our smoking habits, and, more recently, our marijuana habits.

The paper combines annual data from 2016 to 2020 representing 27 states and two US territories for a total sample size of more than 430,000 individuals. The key exposure? Marijuana use, which was coded as the number of days of marijuana use in the past 30 days. The key outcome? Coronary heart disease, collected through questions such as “Has a doctor, nurse, or other health professional ever told you that you had a heart attack?”

Right away you might detect a couple of problems here. But let me show you the results before we worry about what they mean.

You can see the rates of the major cardiovascular outcomes here, stratified by daily use of marijuana, nondaily use, and no use. Broadly speaking, the risk was highest for daily users, lowest for occasional users, and in the middle for non-users.

Of course, non-users and users are different in lots of other ways; non-users were quite a bit older, for example. Adjusting for all those factors showed that, independent of age, smoking status, the presence of diabetes, and so on, there was an independently increased risk for cardiovascular outcomes in people who used marijuana.

Importantly, 60% of people in this study were never smokers, and the results in that group looked pretty similar to the results overall.

But I said there were a couple of problems, so let’s dig into those a bit.

First, like most survey studies, this one requires honest and accurate reporting from its subjects. There was no verification of heart disease using electronic health records or of marijuana usage based on biosamples. Broadly, miscategorization of exposure and outcomes in surveys tends to bias the results toward the null hypothesis, toward concluding that there is no link between exposure and outcome, so perhaps this is okay.
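
To see why random misreporting tends to dilute rather than inflate an association, here is a toy simulation in Python. The prevalences and the true odds ratio are made up; the only point is the direction of the bias when exposure is misreported at random, regardless of outcome.

    # Toy simulation of non-differential exposure misclassification.
    # All numbers are invented; this only illustrates that random misreporting
    # pulls an odds ratio toward 1 (the null).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000

    exposure = rng.binomial(1, 0.2, n)                # true exposure, 20% prevalence
    p_outcome = np.where(exposure == 1, 0.10, 0.05)   # true odds ratio is about 2.1
    outcome = rng.binomial(1, p_outcome)

    def odds_ratio(expo, out):
        a = ((expo == 1) & (out == 1)).sum()
        b = ((expo == 1) & (out == 0)).sum()
        c = ((expo == 0) & (out == 1)).sum()
        d = ((expo == 0) & (out == 0)).sum()
        return (a * d) / (b * c)

    # Misreport exposure at random in 20% of respondents, independent of outcome.
    flip = rng.binomial(1, 0.2, n).astype(bool)
    reported = np.where(flip, 1 - exposure, exposure)

    print("Odds ratio with true exposure:    ", round(odds_ratio(exposure, outcome), 2))
    print("Odds ratio with reported exposure:", round(odds_ratio(reported, outcome), 2))

With the true exposure, the odds ratio comes out around 2; once a fifth of respondents misreport at random, the estimate shrinks noticeably toward 1.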

The bigger problem is the fact that this is a cross-sectional design. If you really wanted to know whether marijuana led to heart disease, you’d do a longitudinal study following users and non-users for some number of decades and see who developed heart disease and who didn’t. (For the pedants out there, I suppose you’d actually want to randomize people to use marijuana or not and then see who had a heart attack, but the IRB keeps rejecting my protocol when I submit it.)

Here, though, we literally can’t tell whether people who use marijuana have more heart attacks or whether people who have heart attacks use more marijuana. The authors argue that there are no data showing that people are more likely to use marijuana after a heart attack or stroke, but the respondents reporting those events had, by the time the survey was conducted, already had their heart attack or stroke.

The authors also imply that they found a dose-response relationship between marijuana use and these cardiovascular outcomes. This is an important statement because dose response is one factor that we use to determine whether a risk factor may actually be causative as opposed to just correlative.

But I take issue with the dose-response language here. The model used to make these graphs classifies marijuana use as a single continuous variable ranging from 0 (no days of use in the past 30 days) to 1 (30 days of use in the past 30 days). The model is thus constrained to monotonically increase or decrease with respect to the outcome. To prove a dose response, you have to give the model the option to find something that isn’t a dose response — for example, by classifying marijuana use into discrete, independent categories rather than a single continuous number.
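
Here is a small sketch, with simulated data, of the modeling point I’m making. It is not the published model, and the cut points and numbers are invented; it just shows that a single continuous exposure term can only report one monotonic slope, while discrete categories are free to disagree with each other.

    # Sketch of the modeling point: a single continuous exposure term forces a
    # monotonic trend, whereas discrete categories let the data bend.
    # Simulated data only; not the published model or its covariates.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 50_000
    days = rng.integers(0, 31, n)   # days of use in the past 30 days

    # Invent a non-monotonic truth: risk dips with light use, rises with heavy use.
    p = 0.05 - 0.02 * (days > 0) + 0.06 * (days >= 20)
    outcome = rng.binomial(1, p)

    # Model 1: exposure as one continuous variable scaled 0 to 1.
    X_cont = sm.add_constant(days / 30)
    fit_cont = sm.Logit(outcome, X_cont).fit(disp=0)

    # Model 2: exposure as independent categories (none / light / heavy).
    light = ((days > 0) & (days < 20)).astype(float)
    heavy = (days >= 20).astype(float)
    X_cat = sm.add_constant(np.column_stack([light, heavy]))
    fit_cat = sm.Logit(outcome, X_cat).fit(disp=0)

    print("Continuous model slope:              ", round(float(fit_cont.params[1]), 2))
    print("Category coefficients (light, heavy):", np.round(fit_cat.params[1:], 2))

In this fake dataset, light use and heavy use push risk in opposite directions; the categorical model can see that, and the continuous model cannot.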

Am I arguing here that marijuana use is good for you? Of course not. Nor am I even arguing that it has no effect on the cardiovascular system. There are endocannabinoid receptors all over your vasculature. It is quite plausible that marijuana use — and particularly the smoking of marijuana, which comes with the inhalation of a fair amount of particulate matter — is bad for you. But a cross-sectional survey study, while a good start, is not quite the right way to answer the question. So, while the jury is still out, it’s high time for more research.

Dr. F. Perry Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Connecticut. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

This transcript has been edited for clarity.

If you’re an epidemiologist trying to explore whether some exposure is a risk factor for a disease, you can run into a tough problem when your exposure of interest is highly correlated with another risk factor for the disease. For decades, this stymied investigations into the link, if any, between marijuana use and cardiovascular disease because, for decades, most people who used marijuana in some way also smoked cigarettes — which is a very clear risk factor for heart disease.
 

But the times they are a-changing.

Thanks to the legalization of marijuana for recreational use in many states, and even broader social trends, there is now a large population of people who use marijuana but do not use cigarettes. That means we can start to determine whether marijuana use is an independent risk factor for heart disease.

And this week, we have the largest study yet to attempt to answer that question, though, as I’ll explain momentarily, the smoke hasn’t entirely cleared yet.

The centerpiece of the study we are discussing this week, “Association of Cannabis Use With Cardiovascular Outcomes Among US Adults,” which appeared in the Journal of the American Heart Association, is the Behavioral Risk Factor Surveillance System, an annual telephone survey conducted by the Centers for Disease Control and Prevention since 1984 that gathers data on all sorts of stuff that we do to ourselves: our drinking habits, our smoking habits, and, more recently, our marijuana habits.

The paper combines annual data from 2016 to 2020 representing 27 states and two US territories for a total sample size of more than 430,000 individuals. The key exposure? Marijuana use, which was coded as the number of days of marijuana use in the past 30 days. The key outcome? Coronary heart disease, collected through questions such as “Has a doctor, nurse, or other health professional ever told you that you had a heart attack?”

Right away you might detect a couple of problems here. But let me show you the results before we worry about what they mean.

You can see the rates of the major cardiovascular outcomes here, stratified by daily use of marijuana, nondaily use, and no use. Broadly speaking, the risk was highest for daily users, lowest for occasional users, and in the middle for non-users.

167103_photo1.jpg


Of course, non-users and users are different in lots of other ways; non-users were quite a bit older, for example. Adjusting for all those factors showed that, independent of age, smoking status, the presence of diabetes, and so on, there was an independently increased risk for cardiovascular outcomes in people who used marijuana.

167103_photo2.jpg


Importantly, 60% of people in this study were never smokers, and the results in that group looked pretty similar to the results overall.

167103_photo3.jpg


But I said there were a couple of problems, so let’s dig into those a bit.

First, like most survey studies, this one requires honest and accurate reporting from its subjects. There was no verification of heart disease using electronic health records or of marijuana usage based on biosamples. Broadly, miscategorization of exposure and outcomes in surveys tends to bias the results toward the null hypothesis, toward concluding that there is no link between exposure and outcome, so perhaps this is okay.

The bigger problem is the fact that this is a cross-sectional design. If you really wanted to know whether marijuana led to heart disease, you’d do a longitudinal study following users and non-users for some number of decades and see who developed heart disease and who didn’t. (For the pedants out there, I suppose you’d actually want to randomize people to use marijuana or not and then see who had a heart attack, but the IRB keeps rejecting my protocol when I submit it.)

Here, though, we literally can’t tell whether people who use marijuana have more heart attacks or whether people who have heart attacks use more marijuana. The authors argue that there are no data that show that people are more likely to use marijuana after a heart attack or stroke, but at the time the survey was conducted, they had already had their heart attack or stroke.

The authors also imply that they found a dose-response relationship between marijuana use and these cardiovascular outcomes. This is an important statement because dose response is one factor that we use to determine whether a risk factor may actually be causative as opposed to just correlative.

167103_photo4.jpg


But I take issue with the dose-response language here. The model used to make these graphs classifies marijuana use as a single continuous variable ranging from 0 (no days of use in the past 30 days) to 1 (30 days of use in the past 30 days). The model is thus constrained to monotonically increase or decrease with respect to the outcome. To prove a dose response, you have to give the model the option to find something that isn’t a dose response — for example, by classifying marijuana use into discrete, independent categories rather than a single continuous number.

Am I arguing here that marijuana use is good for you? Of course not. Nor am I even arguing that it has no effect on the cardiovascular system. There are endocannabinoid receptors all over your vasculature. It is quite plausible that marijuana use — and particularly the smoking of marijuana, which comes with the inhalation of a fair amount of particulate matter — is bad for you. But a cross-sectional survey study, while a good start, is not quite the right way to answer the question. So, while the jury is still out, it’s high time for more research.

Dr. F. Perry Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Connecticut. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

This transcript has been edited for clarity.

If you’re an epidemiologist trying to explore whether some exposure is a risk factor for a disease, you can run into a tough problem when your exposure of interest is highly correlated with another risk factor for the disease. For decades, this stymied investigations into the link, if any, between marijuana use and cardiovascular disease because, for decades, most people who used marijuana in some way also smoked cigarettes — which is a very clear risk factor for heart disease.
 

But the times they are a-changing.

Thanks to the legalization of marijuana for recreational use in many states, and even broader social trends, there is now a large population of people who use marijuana but do not use cigarettes. That means we can start to determine whether marijuana use is an independent risk factor for heart disease.

And this week, we have the largest study yet to attempt to answer that question, though, as I’ll explain momentarily, the smoke hasn’t entirely cleared yet.

The centerpiece of the study we are discussing this week, “Association of Cannabis Use With Cardiovascular Outcomes Among US Adults,” which appeared in the Journal of the American Heart Association, is the Behavioral Risk Factor Surveillance System, an annual telephone survey conducted by the Centers for Disease Control and Prevention since 1984 that gathers data on all sorts of stuff that we do to ourselves: our drinking habits, our smoking habits, and, more recently, our marijuana habits.

The paper combines annual data from 2016 to 2020 representing 27 states and two US territories for a total sample size of more than 430,000 individuals. The key exposure? Marijuana use, which was coded as the number of days of marijuana use in the past 30 days. The key outcome? Coronary heart disease, collected through questions such as “Has a doctor, nurse, or other health professional ever told you that you had a heart attack?”

Right away you might detect a couple of problems here. But let me show you the results before we worry about what they mean.

You can see the rates of the major cardiovascular outcomes here, stratified by daily use of marijuana, nondaily use, and no use. Broadly speaking, the risk was highest for daily users, lowest for occasional users, and in the middle for non-users.

167103_photo1.jpg


Of course, non-users and users are different in lots of other ways; non-users were quite a bit older, for example. Adjusting for all those factors showed that, independent of age, smoking status, the presence of diabetes, and so on, there was an independently increased risk for cardiovascular outcomes in people who used marijuana.

167103_photo2.jpg


Importantly, 60% of people in this study were never smokers, and the results in that group looked pretty similar to the results overall.

167103_photo3.jpg


But I said there were a couple of problems, so let’s dig into those a bit.

First, like most survey studies, this one requires honest and accurate reporting from its subjects. There was no verification of heart disease using electronic health records or of marijuana usage based on biosamples. Broadly, miscategorization of exposure and outcomes in surveys tends to bias the results toward the null hypothesis, toward concluding that there is no link between exposure and outcome, so perhaps this is okay.

The bigger problem is the fact that this is a cross-sectional design. If you really wanted to know whether marijuana led to heart disease, you’d do a longitudinal study following users and non-users for some number of decades and see who developed heart disease and who didn’t. (For the pedants out there, I suppose you’d actually want to randomize people to use marijuana or not and then see who had a heart attack, but the IRB keeps rejecting my protocol when I submit it.)

Here, though, we literally can’t tell whether people who use marijuana have more heart attacks or whether people who have heart attacks use more marijuana. The authors argue that there are no data that show that people are more likely to use marijuana after a heart attack or stroke, but at the time the survey was conducted, they had already had their heart attack or stroke.

The authors also imply that they found a dose-response relationship between marijuana use and these cardiovascular outcomes. This is an important statement because dose response is one factor that we use to determine whether a risk factor may actually be causative as opposed to just correlative.

167103_photo4.jpg


But I take issue with the dose-response language here. The model used to make these graphs classifies marijuana use as a single continuous variable ranging from 0 (no days of use in the past 30 days) to 1 (30 days of use in the past 30 days). The model is thus constrained to monotonically increase or decrease with respect to the outcome. To prove a dose response, you have to give the model the option to find something that isn’t a dose response — for example, by classifying marijuana use into discrete, independent categories rather than a single continuous number.

Am I arguing here that marijuana use is good for you? Of course not. Nor am I even arguing that it has no effect on the cardiovascular system. There are endocannabinoid receptors all over your vasculature. It is quite plausible that marijuana use — and particularly the smoking of marijuana, which comes with the inhalation of a fair amount of particulate matter — is bad for you. But a cross-sectional survey study, while a good start, is not quite the right way to answer the question. So, while the jury is still out, it’s high time for more research.

Dr. F. Perry Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Connecticut. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

Publications
Publications
Topics
Article Type
Sections
Teambase XML
<?xml version="1.0" encoding="UTF-8"?>
<!--$RCSfile: InCopy_agile.xsl,v $ $Revision: 1.35 $-->
<!--$RCSfile: drupal.xsl,v $ $Revision: 1.7 $-->
<root generator="drupal.xsl" gversion="1.7"> <header> <fileName>167103</fileName> <TBEID>0C04EC8A.SIG</TBEID> <TBUniqueIdentifier>MD_0C04EC8A</TBUniqueIdentifier> <newsOrJournal>News</newsOrJournal> <publisherName>Frontline Medical Communications</publisherName> <storyname/> <articleType>2</articleType> <TBLocation>QC Done-All Pubs</TBLocation> <QCDate>20240228T144527</QCDate> <firstPublished>20240228T150704</firstPublished> <LastPublished>20240228T150704</LastPublished> <pubStatus qcode="stat:"/> <embargoDate/> <killDate/> <CMSDate>20240228T150704</CMSDate> <articleSource/> <facebookInfo/> <meetingNumber/> <byline>F Perry Wilson</byline> <bylineText/> <bylineFull/> <bylineTitleText/> <USOrGlobal/> <wireDocType/> <newsDocType>News</newsDocType> <journalDocType/> <linkLabel/> <pageRange/> <citation/> <quizID/> <indexIssueDate/> <itemClass qcode="ninat:text"/> <provider qcode="provider:imng"> <name>IMNG Medical Media</name> <rightsInfo> <copyrightHolder> <name>Frontline Medical News</name> </copyrightHolder> <copyrightNotice>Copyright (c) 2015 Frontline Medical News, a Frontline Medical Communications Inc. company. All rights reserved. This material may not be published, broadcast, copied, or otherwise reproduced or distributed without the prior written permission of Frontline Medical Communications Inc.</copyrightNotice> </rightsInfo> </provider> <abstract/> <metaDescription>It is quite plausible that marijuana use — and particularly the smoking of marijuana, which comes with the inhalation of a fair amount of particulate matter — i</metaDescription> <articlePDF/> <teaserImage>300387</teaserImage> <teaser>Dr. Wilson discusses recent research on cannabis and heart disease.</teaser> <title>COMMENTARY It Sure Looks Like Cannabis Is Bad for the Heart, Doesn’t It?</title> <deck/> <disclaimer/> <AuthorList/> <articleURL/> <doi/> <pubMedID/> <publishXMLStatus/> <publishXMLVersion>1</publishXMLVersion> <useEISSN>0</useEISSN> <urgency/> <pubPubdateYear/> <pubPubdateMonth/> <pubPubdateDay/> <pubVolume/> <pubNumber/> <wireChannels/> <primaryCMSID/> <CMSIDs/> <keywords/> <seeAlsos/> <publications_g> <publicationData> <publicationCode>card</publicationCode> <pubIssueName/> <pubArticleType/> <pubTopics/> <pubCategories/> <pubSections/> </publicationData> <publicationData> <publicationCode>fp</publicationCode> <pubIssueName/> <pubArticleType/> <pubTopics/> <pubCategories/> <pubSections/> </publicationData> <publicationData> <publicationCode>im</publicationCode> <pubIssueName/> <pubArticleType/> <pubTopics/> <pubCategories/> <pubSections/> </publicationData> </publications_g> <publications> <term canonical="true">5</term> <term>15</term> <term>21</term> </publications> <sections> <term canonical="true">52</term> </sections> <topics> <term canonical="true">280</term> </topics> <links> <link> <itemClass qcode="ninat:picture"/> <altRep contenttype="image/jpeg">images/240126aa.jpg</altRep> <description role="drol:caption"/> <description role="drol:credit">Courtesy Dr. Wilson</description> </link> <link> <itemClass qcode="ninat:picture"/> <altRep contenttype="image/jpeg">images/240126ab.jpg</altRep> <description role="drol:caption"/> <description role="drol:credit">Courtesy Dr. Wilson</description> </link> <link> <itemClass qcode="ninat:picture"/> <altRep contenttype="image/jpeg">images/240126ac.jpg</altRep> <description role="drol:caption"/> <description role="drol:credit">Courtesy Dr. 
Wilson</description> </link> <link> <itemClass qcode="ninat:picture"/> <altRep contenttype="image/jpeg">images/240126ad.jpg</altRep> <description role="drol:caption"/> <description role="drol:credit">JAMA</description> </link> </links> </header> <itemSet> <newsItem> <itemMeta> <itemRole>Main</itemRole> <itemClass>text</itemClass> <title>COMMENTARY It Sure Looks Like Cannabis Is Bad for the Heart, Doesn’t It?</title> <deck/> </itemMeta> <itemContent> <p>BY F. PERRY WILSON, MD, MSCE<br/><br/><br/><br/><em>This transcript has been edited for clarity</em>.<br/><br/>If you’re an epidemiologist trying to explore whether some exposure is a risk factor for a disease, you can run into a tough problem when your exposure of interest is highly correlated with another risk factor for the disease. For decades, this stymied investigations into the link, if any, between <span class="Hyperlink"><a href="https://reference.medscape.com/drug/cannabis-ganja-marijuana-343687">marijuana</a></span> use and cardiovascular disease because, for decades, most people who used marijuana in some way also smoked cigarettes — which is a very clear risk factor for heart disease.<br/><br/></p> <h2>But the times they are a-changing.</h2> <p>Thanks to the legalization of marijuana for recreational use in many states, and even broader social trends, there is now a large population of people who use marijuana but do not use cigarettes. That means we can start to determine whether marijuana use is an independent risk factor for heart disease.<br/><br/>And this week, we have the largest study yet to attempt to answer that question, though, as I’ll explain momentarily, the smoke hasn’t entirely cleared yet.<br/><br/>The centerpiece of the study we are discussing this week, <span class="Hyperlink"><a href="https://www.ahajournals.org/doi/10.1161/JAHA.123.030178">“Association of Cannabis Use With Cardiovascular Outcomes Among US Adults,”</a></span> which appeared in the Journal of the American Heart Association, is the <span class="Hyperlink"><a href="https://www.cdc.gov/brfss/index.html">Behavioral Risk Factor Surveillance System</a></span>, an annual telephone survey conducted by the Centers for Disease Control and Prevention since 1984 that gathers data on all sorts of stuff that we do to ourselves: our drinking habits, our smoking habits, and, more recently, our marijuana habits.<br/><br/>The paper combines annual data from 2016 to 2020 representing 27 states and two US territories for a total sample size of more than 430,000 individuals. The key exposure? Marijuana use, which was coded as the number of days of marijuana use in the past 30 days. The key outcome? <span class="Hyperlink"><a href="https://emedicine.medscape.com/article/349040-overview">Coronary heart disease</a></span>, collected through questions such as “Has a doctor, nurse, or other health professional ever told you that you had a heart attack?”<br/><br/>Right away you might detect a couple of problems here. But let me show you the results before we worry about what they mean.<br/><br/>You can see the rates of the major cardiovascular outcomes here, stratified by daily use of marijuana, nondaily use, and no use. Broadly speaking, the risk was highest for daily users, lowest for occasional users, and in the middle for non-users.<br/><br/>[[{"fid":"300387","view_mode":"medstat_image_full_text","fields":{"format":"medstat_image_full_text","field_file_image_alt_text[und][0][value]":"","field_file_image_credit[und][0][value]":"Courtesy Dr. 
Wilson","field_file_image_caption[und][0][value]":""},"type":"media","attributes":{"class":"media-element file-medstat_image_full_text"}}]]<br/><br/>Of course, non-users and users are different in lots of other ways; non-users were quite a bit older, for example. Adjusting for all those factors showed that, independent of age, smoking status, the presence of diabetes, and so on, there was an independently increased risk for cardiovascular outcomes in people who used marijuana.<br/><br/>[[{"fid":"300388","view_mode":"medstat_image_full_text","fields":{"format":"medstat_image_full_text","field_file_image_alt_text[und][0][value]":"","field_file_image_credit[und][0][value]":"Courtesy Dr. Wilson","field_file_image_caption[und][0][value]":""},"type":"media","attributes":{"class":"media-element file-medstat_image_full_text"}}]]<br/><br/>Importantly, 60% of people in this study were never smokers, and the results in that group looked pretty similar to the results overall.<br/><br/>[[{"fid":"300389","view_mode":"medstat_image_full_text","fields":{"format":"medstat_image_full_text","field_file_image_alt_text[und][0][value]":"","field_file_image_credit[und][0][value]":"Courtesy Dr. Wilson","field_file_image_caption[und][0][value]":""},"type":"media","attributes":{"class":"media-element file-medstat_image_full_text"}}]]<br/><br/>But I said there were a couple of problems, so let’s dig into those a bit.<br/><br/>First, like most survey studies, this one requires honest and accurate reporting from its subjects. There was no verification of heart disease using electronic health records or of marijuana usage based on biosamples. Broadly, miscategorization of exposure and outcomes in surveys tends to bias the results toward the null hypothesis, toward concluding that there is no link between exposure and outcome, so perhaps this is okay.<br/><br/>The bigger problem is the fact that this is a cross-sectional design. If you really wanted to know whether marijuana led to heart disease, you’d do a longitudinal study following users and non-users for some number of decades and see who developed heart disease and who didn’t. (For the pedants out there, I suppose you’d actually want to randomize people to use marijuana or not and then see who had a heart attack, but the IRB keeps rejecting my protocol when I submit it.)<br/><br/>Here, though, we literally can’t tell whether people who use marijuana have more heart attacks or whether people who have heart attacks use more marijuana. The authors argue that there are no data that show that people are more likely to use marijuana after a heart attack or <span class="Hyperlink"><a href="https://emedicine.medscape.com/article/1916852-overview">stroke</a></span>, but at the time the survey was conducted, they had already had their heart attack or stroke.<br/><br/>The authors also imply that they found a dose-response relationship between marijuana use and these cardiovascular outcomes. This is an important statement because dose response is one factor that we use to determine whether a risk factor may actually be causative as opposed to just correlative.<br/><br/>[[{"fid":"300390","view_mode":"medstat_image_full_text","fields":{"format":"medstat_image_full_text","field_file_image_alt_text[und][0][value]":"","field_file_image_credit[und][0][value]":"JAMA","field_file_image_caption[und][0][value]":""},"type":"media","attributes":{"class":"media-element file-medstat_image_full_text"}}]]<br/><br/>But I take issue with the dose-response language here. 
The model used to make the dose-response graphs classifies marijuana use as a single continuous variable ranging from 0 (no days of use in the past 30 days) to 1 (30 days of use in the past 30 days). The model is thus constrained to monotonically increase or decrease with respect to the outcome. To prove a dose response, you have to give the model the option to find something that isn’t a dose response — for example, by classifying marijuana use into discrete, independent categories rather than a single continuous number.

Am I arguing here that marijuana use is good for you? Of course not. Nor am I even arguing that it has no effect on the cardiovascular system. There are endocannabinoid receptors all over your vasculature. It is quite plausible that marijuana use — and particularly the smoking of marijuana, which comes with the inhalation of a fair amount of particulate matter — is bad for you. But a cross-sectional survey study, while a good start, is not quite the right way to answer the question. So, while the jury is still out, it’s high time for more research.

Dr. F. Perry Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Connecticut. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.
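As a footnote to the dose-response point above, here is a minimal simulated sketch (the data, effect sizes, and variable names are invented for illustration, not taken from the JAHA paper) of why the coding choice matters: forcing use into a single continuous 0-to-1 variable can only produce a monotonic trend, whereas discrete categories let a non-monotonic pattern show up if it is there.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical exposure: days of marijuana use in the past 30 days
days = rng.choice([0, 5, 30], size=n, p=[0.85, 0.10, 0.05])

# Hypothetical truth: only daily use raises risk (deliberately NOT a dose response)
log_odds = -3.0 + np.where(days == 30, 0.8, 0.0)
outcome = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

df = pd.DataFrame({
    "outcome": outcome,
    "use_frac": days / 30,  # single continuous variable, 0 to 1
    "use_cat": np.select([days == 0, days == 30], ["none", "daily"], default="nondaily"),
})

# Continuous coding: the fitted trend can only rise or fall monotonically with use_frac
m_continuous = smf.logit("outcome ~ use_frac", data=df).fit(disp=0)

# Categorical coding: each use group gets its own estimate relative to non-users,
# so a flat "nondaily" effect and an elevated "daily" effect can both be seen
m_categorical = smf.logit("outcome ~ C(use_cat, Treatment(reference='none'))", data=df).fit(disp=0)

print(m_continuous.params)
print(m_categorical.params)

With this hypothetical setup, the continuous model dutifully reports a positive “trend” even though the simulated risk is flat until daily use; the categorical model shows the nondaily coefficient near zero and the daily coefficient elevated.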

Bivalent Vaccines Protect Even Children Who’ve Had COVID

Article Type
Changed
Tue, 02/13/2024 - 15:49

 



This transcript has been edited for clarity.

It was only 3 years ago that we called the pathogen we now refer to as the coronavirus “2019-nCoV.” It was, in many ways, more descriptive than what we have today. The little “n” there stood for “novel” — and it was really that little “n” that caused us all the trouble.

You see, coronaviruses themselves were not really new to us. Understudied, perhaps, but with four strains running around the globe at any time giving rise to the common cold, these were viruses our bodies understood.

But the coronavirus discovered in 2019 was novel — not just to the world, but to our own immune systems. It was different enough from its circulating relatives that our immune memory cells failed to recognize it. Instead of acting like a cold, it acted like nothing we had seen before, at least in our lifetime. The story of the pandemic is very much a bildungsroman of our immune systems — a story of how our immunity grew up.

The difference between the start of 2020 and now, when infections with the coronavirus remain common but not as deadly, can be measured in terms of immune education. Some of our immune systems were educated by infection, some by vaccination, and many by both.

When the first vaccines emerged in December 2020, the opportunity to educate our immune systems was still huge. Though, at the time, an estimated 20 million had been infected in the US and 350,000 had died, there was a large population that remained immunologically naive. I was one of them.

If 2020 into early 2021 was the era of immune education, the postvaccine period was the era of the variant. From one COVID strain to two, to five, to innumerable, our immune memory — trained on a specific version of the virus or its spike protein — became imperfect again. Not naive; these variants were not “novel” in the way COVID-19 was novel, but they were different. And different enough to cause infection.

Following the playbook of another virus that loves to come dressed up in different outfits, the flu virus, we find ourselves in the booster era — a world where yearly doses of a vaccine, ideally matched to the variants circulating when the vaccine is given, are the recommendation if not the norm.

But questions remain about the vaccination program, particularly around who should get it. And two populations with big question marks over their heads are (1) people who have already been infected and (2) kids, because their risk for bad outcomes is so much lower.

This week, we finally have some evidence that can shed light on these questions. The study under the spotlight, appearing in JAMA, tries to analyze the ability of the bivalent vaccine — that’s the second one to come out, around September 2022 — to protect kids from COVID-19.

Now, right off the bat, this was not a randomized trial. The studies that established the viability of the mRNA vaccine platform were randomized trials; they happened before the vaccine was authorized. But trials of the bivalent vaccine were mostly limited to proving immune response, not protection from disease.

Nevertheless, with some good observational methods and some statistics, we can try to tease out whether bivalent vaccines in kids worked.

The study combines three prospective cohort studies. The details are in the paper, but what you need to know is that the special sauce of these studies was that the kids were tested for COVID-19 on a weekly basis, whether they had symptoms or not. This is critical because asymptomatic infections can transmit COVID-19.

Let’s do the variables of interest. First and foremost, the bivalent vaccine. Some of these kids got the bivalent vaccine, some didn’t. Other key variables include prior vaccination with the monovalent vaccine. Some had been vaccinated with the monovalent vaccine before, some hadn’t. And, of course, prior infection. Some had been infected before (based on either nasal swabs or blood tests).

Let’s focus first on the primary exposure of interest: getting that bivalent vaccine. Again, this was not randomly assigned; kids who got the bivalent vaccine were different from those who did not. In general, they lived in smaller households, they were more likely to be White, less likely to have had a prior COVID infection, and quite a bit more likely to have at least one chronic condition.

To me, this constellation of factors describes a slightly higher-risk group; it makes sense that they were more likely to get the second vaccine.

Given those factors, what were the rates of COVID infection? After nearly a year of follow-up, around 15% of the kids who hadn’t received the bivalent vaccine got infected compared with 5% of the vaccinated kids. Symptomatic infections represented roughly half of all infections in both groups.

After adjustment for factors that differed between the groups, this difference translated into a vaccine efficacy of about 50% in this population. That’s our first data point. Yes, the bivalent vaccine worked. Not amazingly, of course. But it worked.

What about the kids who had had a prior COVID infection? Somewhat surprisingly, the vaccine was just as effective in this population, despite the fact that their immune systems already had some knowledge of COVID. Ten percent of unvaccinated kids got infected, even though they had been infected before. Just 2.5% of kids who received the bivalent vaccine got infected, suggesting some synergy between prior infection and vaccination.
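If you want to check the arithmetic, vaccine efficacy estimated from cumulative incidence is just 1 minus the risk ratio. Here is a minimal sketch using the rounded percentages quoted above; these are crude figures, and the roughly 50% estimate reported in the paper reflects adjustment for the group differences described earlier.

# Crude vaccine efficacy = 1 - (risk in vaccinated / risk in unvaccinated)
def vaccine_efficacy(risk_vaccinated, risk_unvaccinated):
    return 1 - risk_vaccinated / risk_unvaccinated

overall = vaccine_efficacy(0.05, 0.15)          # ~67% crude, whole cohort
prior_infected = vaccine_efficacy(0.025, 0.10)  # ~75% crude, previously infected kids

print(f"Crude VE, overall: {overall:.0%}")
print(f"Crude VE, prior infection: {prior_infected:.0%}")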

These data suggest that the bivalent vaccine did reduce the risk for COVID infection in kids. All good. But the piece still missing is how severe these infections were. It doesn’t appear that any of the 426 infections documented in this study resulted in hospitalization or death, fortunately. And no data are presented on the incidence of multisystem inflammatory syndrome in children (MIS-C), though given its rarity, I’d be surprised if any of these kids developed it either.

So where are we? Well, it seems that the narrative out there that says “the vaccines don’t work” or “the vaccines don’t work if you’ve already been infected” is probably not true. They do work. This study and others in adults show that. If they work to reduce infections, as this study shows, they will also work to reduce deaths. It’s just that death is fortunately so rare in children that the number needed to vaccinate to prevent one death is very large. In that situation, the decision to vaccinate comes down to the risks associated with vaccination. So far, those risks seem very minimal.
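That number-needed-to-vaccinate point is simple arithmetic: it is the reciprocal of the absolute risk reduction. A quick sketch, using the study’s approximate infection rates and a purely hypothetical death rate (no deaths were documented in this study), shows why a rare outcome drives the number so high.

# Number needed to vaccinate (NNV) = 1 / absolute risk reduction
def nnv(risk_unvaccinated, risk_vaccinated):
    return 1 / (risk_unvaccinated - risk_vaccinated)

# Infection (a common outcome): ~15% vs ~5% -> about 10 kids vaccinated per infection prevented
print(round(nnv(0.15, 0.05)))

# Death (vanishingly rare in kids): hypothetical 2 vs 1 per million -> about a million vaccinated per death prevented
print(round(nnv(2e-6, 1e-6)))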

Perhaps falling into a flu-like yearly vaccination schedule is not simply the result of old habits dying hard. Maybe it’s actually not a bad idea.
 

Dr. F. Perry Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.
