Facial Temperature Can Reveal Age and Disease

This transcript has been edited for clarity. 

My oldest daughter is at sleepaway camp for a couple of weeks, and the camp has a photographer who goes around all day taking pictures of the kids, which get uploaded to a private Facebook group. In the past, I would go online every day (or, okay, several times a day) and scroll through all those pictures looking for one that features my kid. 

I don’t have to do that anymore. This year, I simply uploaded a picture of my daughter to an app and artificial intelligence (AI) takes care of the rest, recognizing her face amidst the sea of smiling children, and flagging just those photos for me to peruse. It’s amazing, really. And a bit scary.

The fact that facial recognition has penetrated the summer camp market should tell you that the tech is truly ubiquitous. But today we’re going to think a bit more about what AI can do with a picture of your face, because the power of facial recognition is not just skin deep.

What’s got me hot and bothered about facial images is this paper, appearing in Cell Metabolism, which adds a new layer to the standard facial-analysis playbook: facial temperature.

To understand this paper, you need to understand a whole field of research devoted to developing various “clocks” for age. 

It turns out that age really is just a number. Our cells, our proteins, our biochemistry can be analyzed to give different numbers. These “clocks,” as distinct from the calendar we usually use to measure our age, might have more predictive power than the number itself. 

There are numerous molecular clocks, such as telomere length, that not only correlate with calendar age but are superior to calendar age in predicting age-related complications. Testing telomere length typically requires a blood sample — and remains costly. But we can use other sources to estimate age; how about a photo?

I mean, we do this all the time when we meet someone new or, as a physician, when we meet a new patient. I have often written that a patient “appears younger than their stated age,” and we’ve all had the experience of hearing how old someone is and being shocked. I mean, have you seen Sharon Stone recently? She’s 66 years old. Okay — to be fair, there might be some outside help there. But you get the point.

Back to the Cell Metabolism paper. Researchers report on multiple algorithms to obtain an “age” from a picture of an individual’s face. 

The first algorithm is pretty straightforward. Researchers collected 2811 images, all of Han Chinese individuals ranging in age from 20 to 90 years, and reconstructed a 3D facial map from those. 

They then trained a convolutional neural network to predict the individuals’ ages from the pictures. It was quite accurate, as you can see here.
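
For a sense of what that kind of model looks like in code, here is a minimal sketch of convolutional age regression in Python using PyTorch. It is purely illustrative: the toy random images, the tiny network, and the training settings are assumptions, not the architecture the authors used.

    # Minimal sketch of CNN-based age regression from face images.
    # Illustrative only: random stand-in data, not the paper's model or dataset.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Toy stand-in data: 32 RGB "face" images (128x128) with known ages 20-90
    images = torch.rand(32, 3, 128, 128)
    ages = torch.randint(20, 91, (32,)).float()
    loader = DataLoader(TensorDataset(images, ages), batch_size=8, shuffle=True)

    # Small convolutional regressor: image in, single age estimate out
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 1),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.L1Loss()  # mean absolute error, in years

    for epoch in range(5):
        for x, y in loader:
            pred = model(x).squeeze(1)
            loss = loss_fn(pred, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()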

In the AI age, this may not seem that impressive. A brief search online turned up dozens of apps that promised to guess my age from a photo.

I sent this rather unflattering picture of myself to ChatGPT, which, after initially demurring and saying it was not designed to guess ages, pegged me at somewhere between 35 and 45, which I am taking as a major victory.

But the Cell Metabolism paper goes deeper. Literally. They added a new dimension to facial image analysis by taking an individual’s temperature using a thermal scanning camera that provided temperatures at 54 different landmarks across the face.

And this is where things start to get interesting. Because sure, the visible part of your face can change depending on makeup, expression, plastic surgery, and the like. But the temperature? That’s harder to fake.

It turns out that the temperature distribution in your face changes as you get older. There is a cooling of the nose and the cheeks, for example.

And the researchers could combine all this temperature data to guess someone’s calendar age fairly accurately, though notably not as accurately as the model that just looks at the pictures.
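
To make the thermal-age idea concrete, here is a small sketch that simulates 54 landmark temperatures drifting with age and then fits an ordinary least-squares model to recover age from them. The simulated data and the plain linear model are assumptions for illustration, not the authors’ method.

    # Illustrative "thermal age" from 54 facial landmark temperatures.
    # The simulated cooling with age and the linear model are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_landmarks = 500, 54

    true_age = rng.uniform(20, 90, n_people)
    # Most landmarks cool slightly with age (degrees C per year), plus noise
    slopes = rng.normal(-0.02, 0.01, n_landmarks)
    temps = (34.0 + true_age[:, None] * slopes[None, :]
             + rng.normal(0, 0.3, (n_people, n_landmarks)))

    # Ordinary least squares: age ~ landmark temperatures
    X = np.column_stack([np.ones(n_people), temps])
    coef, *_ = np.linalg.lstsq(X, true_age, rcond=None)
    thermal_age = X @ coef

    mae = np.mean(np.abs(thermal_age - true_age))
    print(f"Mean absolute error of thermal age: {mae:.1f} years")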

But guessing your age is not really the interesting part of thermal imaging of the face. It’s guessing — or, rather, predicting — the state of your metabolism. All these study participants had extensive metabolic testing performed, as well as detailed analysis of their lifestyle behaviors. And facial images could be used to predict those factors.

For example, the 3D reconstruction of the faces could predict who ate seafood (they tend to look younger than their actual age) compared with who ate poultry and meat (they tend to look older). The thermal imaging could predict who got more sleep (they look younger from a temperature perspective) and who ate more yogurt (also younger-appearing, temperature-wise). Facial temperature patterns could identify those with higher BMI, higher blood pressure, higher fasting glucose. 

The researchers used the difference between actual and predicted age as a metric to measure illness as well. You can see here how, on average, individuals with hypertension, diabetes, and even liver cysts are “older,” at least by face temperature.
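
The metric here is essentially an “age gap”: predicted age minus calendar age, compared across groups. Here is a quick sketch of that kind of comparison, using simulated numbers rather than the study’s data:

    # "Age gap" (predicted minus calendar age) compared between groups.
    # The group sizes and the assumed 2.5-year excess are illustrative only.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    age_gap_healthy = rng.normal(0.0, 4.0, 300)       # years
    age_gap_hypertension = rng.normal(2.5, 4.0, 120)  # assumed excess thermal age

    t, p = stats.ttest_ind(age_gap_hypertension, age_gap_healthy, equal_var=False)
    print(f"Mean gap, hypertension: {age_gap_hypertension.mean():+.1f} y; "
          f"healthy: {age_gap_healthy.mean():+.1f} y (Welch t={t:.2f}, p={p:.3g})")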

It may even be possible to use facial temperature as biofeedback. In a small study, the researchers measured the difference between facial temperature age and real age before and after 2 weeks of jump-roping. It turns out that 2 weeks of jump-roping can make you look about 5 years younger, at least as judged by a thermal camera. Or like the Predator.

Okay, this is all very cool, but I’m not saying we’ll all be doing facial temperature tests in the near future. No; what this study highlights for me is how much information about ourselves is available to those who know how to decode it. Maybe those data come from the wrinkles in our faces, or the angles of our smiles, or the speed with which we type, or the temperature of our elbows. The data have always been there, actually, but we’ve never had the tools powerful enough to analyze them until now.

When I was a kid, I was obsessed with Star Trek — I know, you’re shocked — and, of course, the famous tricorder, a scanner that could tell everything about someone’s state of health in 5 seconds from 3 feet away. That’s how I thought medicine really would be in the future. Once I got to medical school, I was disabused of that notion. But the age of data, the age of AI, may mean the tricorder age is not actually that far away.
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Long COVID Can’t Be Solved Until We Decide What It Is

This transcript has been edited for clarity. 

I want to help people suffering from long COVID as much as anyone. But we have a real problem. In brief, we are being too inclusive. The first thing you learn, when you start studying the epidemiology of diseases, is that you need a good case definition. And our case definition for long COVID sucks. Just last week, the National Academies of Sciences, Engineering, and Medicine (NASEM) issued a definition of long COVID with the aim of “improving consistency, documentation, and treatment.” Good news, right? Here’s the definition: “Long COVID is an infection-associated chronic condition that occurs after SARS-CoV-2 infection and is present for at least 3 months as a continuous, relapsing and remitting, or progressive disease state that affects one or more organ systems.” 

This is not helpful. The symptoms can be in any organ system, can be continuous or relapsing and remitting. Basically, if you’ve had COVID — and essentially all of us have by now — and you have any symptom, even one that comes and goes, 3 months after that, it’s long COVID. They don’t even specify that it has to be a new symptom.

I’m not saying that long COVID doesn’t exist. I’m not saying it isn’t weird or that it can’t present in diverse ways. But a case definition like this hinders our ability to figure out exactly what is going on and to identify good treatments. It mixes people with real long COVID with a ton of other people, diluting our power to do science on the condition. And I have sort of a case study in this problem today, based on a paper getting a lot of press suggesting that one out of every five people has long COVID.

We are talking about this study, “Epidemiologic Features of Recovery From SARS-CoV-2 Infection,” appearing in JAMA Network Open this week. While I think the idea is important, the study really highlights why it can be so hard to study long COVID. 

As part of efforts to understand long COVID, the National Institutes of Health (NIH) leveraged 14 of its ongoing cohort studies. The NIH has multiple longitudinal cohort studies that follow various groups of people over time. You may have heard of the REGARDS study, for example, which focuses on cardiovascular risks to people living in the southern United States. Or the ARIC study, which followed adults in four communities across the United States for the development of heart disease. All 14 of the cohorts in this study are long-running projects with ongoing data collection. So, it was not a huge lift to add some questions to the yearly surveys and studies the participants were already getting.

To wit: “Do you think that you have had COVID-19?” and “Would you say that you are completely recovered now?” Those who said they weren’t fully recovered were asked how long it had been since their infection, and anyone who answered with a duration > 90 days was considered to have long COVID.
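
In code, that operational definition boils down to a few checks. Here is a small sketch; the field names are hypothetical stand-ins, not the actual survey variables:

    # Sketch of the survey-based classification described above.
    # Field names are hypothetical, not the study's actual variables.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SurveyResponse:
        thinks_had_covid: bool
        fully_recovered: bool
        days_since_infection: Optional[int]  # asked only if not recovered

    def classified_long_covid(r: SurveyResponse) -> bool:
        """True if the response meets the study's operational definition."""
        return (
            r.thinks_had_covid
            and not r.fully_recovered
            and r.days_since_infection is not None
            and r.days_since_infection > 90
        )

    print(classified_long_covid(SurveyResponse(True, False, 120)))  # True
    print(classified_long_covid(SurveyResponse(True, True, None)))  # False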

So, we have self-report of infection, self-report of duration of symptoms, and self-report of recovery. This is fine, of course; individuals’ perceptions of their own health are meaningful. But the vagaries inherent in those perceptions are going to muddy the waters as we attempt to discover the true nature of the long COVID syndrome.

But let’s look at some results. Out of 4708 individuals studied, 842 (17.9%) had not recovered by 90 days.
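
As a quick sanity check on that headline number: 842 of 4708 is 17.9%, and a simple normal-approximation 95% confidence interval runs from roughly 16.8% to 19.0%. The interval is a back-of-the-envelope illustration, not a figure reported by the study.

    # Back-of-the-envelope check: proportion not recovered and an approximate 95% CI.
    import math

    n, k = 4708, 842
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    low, high = p - 1.96 * se, p + 1.96 * se
    print(f"{p:.1%} not recovered by 90 days (approx. 95% CI {low:.1%} to {high:.1%})")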

This study included not only people hospitalized with COVID, as some prior long COVID studies did, but also people who self-diagnosed, tested at home, and so on. This estimate is as reflective of the broader US population as we can get. 

And there are some interesting trends here.

Recovery time was longer in the first waves of COVID than in the Omicron wave.

Recovery times were longer for smokers, those with diabetes, and those who were obese.

Recovery times were generally longer when the disease was more severe, though there is an unusual finding that women had longer recovery times despite their lower average severity of illness.

Vaccination was associated with shorter recovery times, as you can see here. 

This is all quite interesting. It’s clear that people feel they are sick for a while after COVID. But we need to understand whether these symptoms are due to the lingering effects of a bad infection that knocks you down a peg, or to an ongoing syndrome — this thing we call long COVID — that has a physiologic basis and thus can be treated. And this study doesn’t help us much with that.

Not that this was the authors’ intention. This is a straight-up epidemiology study. But the problem is deeper than that. Let’s imagine that you want to really dig into this long COVID thing and get blood samples from people with it, ideally from controls with some other respiratory virus infection, and do all kinds of genetic and proteomic studies and stuff to really figure out what’s going on. Who do you enroll to be in the long COVID group? Do you enroll anyone who says they had COVID and still has some symptom more than 90 days after? You are going to find an awful lot of eligible people, and I guarantee that if there is a pathognomonic signature of long COVID, not all of them will have it.

And what about other respiratory viruses? This study in The Lancet Infectious Diseases compared long-term outcomes among hospitalized patients with COVID vs influenza. In general, the COVID outcomes are worse, but let’s not knock the concept of “long flu.” Across the board, roughly 50% of people report symptoms in any given organ system.

What this is all about is something called misclassification bias, a form of information bias that arises in a study where you label someone as diseased when they are not, or vice versa. If this happens at random, it’s bad; you’ve lost your ability to distinguish the characteristics of the diseased population from those of the nondiseased population.

When it’s not random, it’s really bad. If we are more likely to misclassify women as having long COVID, for example, then it will appear that long COVID is more likely among women, or more likely among those with higher estrogen levels, or something. And that might simply be wrong.
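
A toy simulation makes the point: give men and women the same true prevalence of long COVID, but let the false-positive labeling rate differ by sex, and an apparent sex difference appears out of nowhere. Every rate below is invented for illustration.

    # Toy simulation of differential misclassification.
    # True prevalence is identical in both sexes; only the error rates differ.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000
    female = rng.random(n) < 0.5
    true_long_covid = rng.random(n) < 0.10       # same 10% prevalence in both sexes

    sens = 0.95                                  # same sensitivity for everyone
    false_pos = np.where(female, 0.10, 0.03)     # but specificity differs by sex
    labeled = np.where(
        true_long_covid,
        rng.random(n) < sens,
        rng.random(n) < false_pos,
    )

    for sex, mask in [("women", female), ("men", ~female)]:
        print(f"Observed 'long COVID' in {sex}: {labeled[mask].mean():.1%} "
              f"(true prevalence {true_long_covid[mask].mean():.1%})")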

I’m not saying that’s what happened here; this study does a really great job of what it set out to do, which was to describe the patterns of lingering symptoms after COVID. But we are not going to make progress toward understanding long COVID until we are less inclusive with our case definition. To paraphrase Syndrome from The Incredibles: If everyone has long COVID, then no one does. 
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Connecticut. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

This transcript has been edited for clarity. 

I want to help people suffering from long COVID as much as anyone. But we have a real problem. In brief, we are being too inclusive. The first thing you learn, when you start studying the epidemiology of diseases, is that you need a good case definition. And our case definition for long COVID sucks. Just last week, the National Academies of Sciences, Engineering, and Medicine (NASEM) issued a definition of long COVID with the aim of “improving consistency, documentation, and treatment.” Good news, right? Here’s the definition: “Long COVID is an infection-associated chronic condition that occurs after SARS-CoV-2 infection and is present for at least 3 months as a continuous, relapsing and remitting, or progressive disease state that affects one or more organ systems.” 

This is not helpful. The symptoms can be in any organ system, can be continuous or relapsing and remitting. Basically, if you’ve had COVID — and essentially all of us have by now — and you have any symptom, even one that comes and goes, 3 months after that, it’s long COVID. They don’t even specify that it has to be a new symptom.

I’m not saying that long COVID doesn’t exist. I’m not saying it isn’t weird or that it can’t present in diverse ways. But a case definition like this hinders our ability to figure out exactly what is going on and to identify good treatments. It mixes people with real long COVID with a ton of other people, diluting our power to do science on the condition. And I have sort of a case study in this problem today, based on a paper getting a lot of press suggesting that one out of every five people has long COVID.

We are talking about this study, “Epidemiologic Features of Recovery From SARS-CoV-2 Infection,” appearing in JAMA Network Open this week. While I think the idea is important, the study really highlights why it can be so hard to study long COVID. 

As part of efforts to understand long COVID, the National Institutes of Health (NIH) leveraged 14 of its ongoing cohort studies. The NIH has multiple longitudinal cohort studies that follow various groups of people over time. You may have heard of the REGARDS study, for example, which focuses on cardiovascular risks to people living in the southern United States. Or the ARIC study, which followed adults in four communities across the United States for the development of heart disease. All 14 of the cohorts in this study are long-running projects with ongoing data collection. So, it was not a huge lift to add some questions to the yearly surveys and studies the participants were already getting.

To wit: “Do you think that you have had COVID-19?” and “Would you say that you are completely recovered now?” Those who said they weren’t fully recovered were asked how long it had been since their infection, and anyone who answered with a duration > 90 days was considered to have long COVID.

So, we have self-report of infection, self-report of duration of symptoms, and self-report of recovery. This is fine, of course; individuals’ perceptions of their own health are meaningful. But the vagaries inherent in those perceptions are going to muddy the waters as we attempt to discover the true nature of the long COVID syndrome.

But let’s look at some results. Out of 4708 individuals studied, 842 (17.9%) had not recovered by 90 days.

phubracrocres


This study included not only people hospitalized with COVID, as some prior long COVID studies did, but people self-diagnosed, tested at home, etc. This estimate is as reflective of the broader US population as we can get. 

And there are some interesting trends here.

Recovery time was longer in the first waves of COVID than in the Omicron wave.

spouupewisladotriuidawochelinisarunothiclitilaswicespushetathilistocritrucifrevachamucabaswaluswobilistudrubriwrothihegethotrurivucrehespivobruchocroswephikuphochikeuoditouidrajanurutrucluspofrecrijabrotachotheswitruc


Recovery times were longer for smokers, those with diabetes, and those who were obese.

wrecroswuslabuphapheshephubophecrophaphupesafruliclicrisuswuwiphetrutushucrolephecinofrauohucrofruwestathistokowrajeshoteclistuwrutribopreuehavo


Recovery times were longer if the disease was more severe, in general. Though there is an unusual finding that women had longer recovery times despite their lower average severity of illness.

168476_image4_web.JPG


Vaccination was associated with shorter recovery times, as you can see here. 

168476_image5_web.JPG


This is all quite interesting. It’s clear that people feel they are sick for a while after COVID. But we need to understand whether these symptoms are due to the lingering effects of a bad infection that knocks you down a peg, or to an ongoing syndrome — this thing we call long COVID — that has a physiologic basis and thus can be treated. And this study doesn’t help us much with that.

Not that this was the authors’ intention. This is a straight-up epidemiology study. But the problem is deeper than that. Let’s imagine that you want to really dig into this long COVID thing and get blood samples from people with it, ideally from controls with some other respiratory virus infection, and do all kinds of genetic and proteomic studies and stuff to really figure out what’s going on. Who do you enroll to be in the long COVID group? Do you enroll anyone who says they had COVID and still has some symptom more than 90 days after? You are going to find an awful lot of eligible people, and I guarantee that if there is a pathognomonic signature of long COVID, not all of them will have it.

And what about other respiratory viruses? This study in The Lancet Infectious Diseases compared long-term outcomes among hospitalized patients with COVID vs influenza. In general, the COVID outcomes are worse, but let’s not knock the concept of “long flu.” Across the board, roughly 50% of people report symptoms across any given organ system.

168476_image6_web.JPG


What this is all about is something called misclassification bias, a form of information bias that arises in a study where you label someone as diseased when they are not, or vice versa. If this happens at random, it’s bad; you’ve lost your ability to distinguish characteristics from the diseased and nondiseased population.

When it’s not random, it’s really bad. If we are more likely to misclassify women as having long COVID, for example, then it will appear that long COVID is more likely among women, or more likely among those with higher estrogen levels, or something. And that might simply be wrong.

I’m not saying that’s what happened here; this study does a really great job of what it set out to do, which was to describe the patterns of lingering symptoms after COVID. But we are not going to make progress toward understanding long COVID until we are less inclusive with our case definition. To paraphrase Syndrome from The Incredibles: If everyone has long COVID, then no one does. 
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Connecticut. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

This transcript has been edited for clarity. 

I want to help people suffering from long COVID as much as anyone. But we have a real problem. In brief, we are being too inclusive. The first thing you learn, when you start studying the epidemiology of diseases, is that you need a good case definition. And our case definition for long COVID sucks. Just last week, the National Academies of Sciences, Engineering, and Medicine (NASEM) issued a definition of long COVID with the aim of “improving consistency, documentation, and treatment.” Good news, right? Here’s the definition: “Long COVID is an infection-associated chronic condition that occurs after SARS-CoV-2 infection and is present for at least 3 months as a continuous, relapsing and remitting, or progressive disease state that affects one or more organ systems.” 

This is not helpful. The symptoms can be in any organ system, can be continuous or relapsing and remitting. Basically, if you’ve had COVID — and essentially all of us have by now — and you have any symptom, even one that comes and goes, 3 months after that, it’s long COVID. They don’t even specify that it has to be a new symptom.

I’m not saying that long COVID doesn’t exist. I’m not saying it isn’t weird or that it can’t present in diverse ways. But a case definition like this hinders our ability to figure out exactly what is going on and to identify good treatments. It mixes people with real long COVID with a ton of other people, diluting our power to do science on the condition. And I have sort of a case study in this problem today, based on a paper getting a lot of press suggesting that one out of every five people has long COVID.

We are talking about this study, “Epidemiologic Features of Recovery From SARS-CoV-2 Infection,” appearing in JAMA Network Open this week. While I think the idea is important, the study really highlights why it can be so hard to study long COVID. 

As part of efforts to understand long COVID, the National Institutes of Health (NIH) leveraged 14 of its ongoing cohort studies. The NIH has multiple longitudinal cohort studies that follow various groups of people over time. You may have heard of the REGARDS study, for example, which focuses on cardiovascular risks to people living in the southern United States. Or the ARIC study, which followed adults in four communities across the United States for the development of heart disease. All 14 of the cohorts in this study are long-running projects with ongoing data collection. So, it was not a huge lift to add some questions to the yearly surveys and studies the participants were already getting.

To wit: “Do you think that you have had COVID-19?” and “Would you say that you are completely recovered now?” Those who said they weren’t fully recovered were asked how long it had been since their infection, and anyone who answered with a duration > 90 days was considered to have long COVID.

So, we have self-report of infection, self-report of duration of symptoms, and self-report of recovery. This is fine, of course; individuals’ perceptions of their own health are meaningful. But the vagaries inherent in those perceptions are going to muddy the waters as we attempt to discover the true nature of the long COVID syndrome.

But let’s look at some results. Out of 4708 individuals studied, 842 (17.9%) had not recovered by 90 days.

[Figure: Proportion of participants not recovered by 90 days (Dr. Wilson)]


Unlike some prior long COVID studies, this one included not only people hospitalized with COVID but also people who self-diagnosed, tested at home, and so on. This estimate is about as reflective of the broader US population as we can get. 

And there are some interesting trends here.

Recovery time was longer in the first waves of COVID than in the Omicron wave.

[Figure: Recovery time by pandemic wave and viral variant (JAMA Network Open)]


Recovery times were longer for smokers, those with diabetes, and those who were obese.

[Figure: Recovery time by smoking status, diabetes, and BMI (JAMA Network Open)]


In general, recovery times were longer when the disease was more severe, though there is an unusual finding: women had longer recovery times despite their lower average severity of illness.

[Figure: Recovery time by severity of illness and by sex (JAMA Network Open)]


Vaccination was associated with shorter recovery times, as you can see here. 

[Figure: Recovery time by vaccination status (JAMA Network Open)]


This is all quite interesting. It’s clear that people feel they are sick for a while after COVID. But we need to understand whether these symptoms are due to the lingering effects of a bad infection that knocks you down a peg, or to an ongoing syndrome — this thing we call long COVID — that has a physiologic basis and thus can be treated. And this study doesn’t help us much with that.

Not that this was the authors’ intention. This is a straight-up epidemiology study. But the problem is deeper than that. Let’s imagine that you want to really dig into this long COVID thing and get blood samples from people with it, ideally from controls with some other respiratory virus infection, and do all kinds of genetic and proteomic studies and stuff to really figure out what’s going on. Who do you enroll to be in the long COVID group? Do you enroll anyone who says they had COVID and still has some symptom more than 90 days after? You are going to find an awful lot of eligible people, and I guarantee that if there is a pathognomonic signature of long COVID, not all of them will have it.

And what about other respiratory viruses? This study in The Lancet Infectious Diseases compared long-term outcomes among hospitalized patients with COVID vs influenza. In general, the COVID outcomes are worse, but let’s not knock the concept of “long flu.” Across the board, roughly 50% of people report symptoms in any given organ system.

[Figure: Long-term outcomes by organ system, COVID-19 vs seasonal influenza]


What this is all about is something called misclassification bias, a form of information bias that arises when a study labels someone as diseased when they are not, or vice versa. If this happens at random, it’s bad; you lose your ability to distinguish the characteristics of the diseased population from those of the nondiseased population.

When it’s not random, it’s really bad. If we are more likely to misclassify women as having long COVID, for example, then it will appear that long COVID is more likely among women, or more likely among those with higher estrogen levels, or something. And that might simply be wrong.
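
To see how dangerous nonrandom misclassification can be, here is a toy simulation. It is my own sketch rather than anything from the paper: two groups with identical true disease risk, one of which is more often mislabeled as diseased. A spurious association appears out of nothing.

```python
import random

# Toy simulation of differential misclassification (illustrative only).
# Both groups have the same true disease risk, but group A is mislabeled
# as "diseased" more often (15% vs 5% false-positive rate).
random.seed(0)
N = 100_000
TRUE_RISK = 0.10

group_a = [random.random() < 0.5 for _ in range(N)]        # True = member of group A
disease = [random.random() < TRUE_RISK for _ in range(N)]  # same risk in both groups

labeled = []
for in_a, sick in zip(group_a, disease):
    false_positive_rate = 0.15 if in_a else 0.05
    labeled.append(sick or random.random() < false_positive_rate)

def odds(cases, total):
    return cases / (total - cases)

a_total = sum(group_a)
a_cases = sum(lab for g, lab in zip(group_a, labeled) if g)
b_total = N - a_total
b_cases = sum(lab for g, lab in zip(group_a, labeled) if not g)

# Despite no true difference in risk, the observed odds ratio is well above 1.
print(round(odds(a_cases, a_total) / odds(b_cases, b_total), 2))
```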

I’m not saying that’s what happened here; this study does a really great job of what it set out to do, which was to describe the patterns of lingering symptoms after COVID. But we are not going to make progress toward understanding long COVID until we are less inclusive with our case definition. To paraphrase Syndrome from The Incredibles: If everyone has long COVID, then no one does. 
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Connecticut. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.


In the Future, a Robot Intensivist May Save Your Life

Article Type
Changed
Tue, 06/04/2024 - 11:05

 

This transcript has been edited for clarity. 

They call it the “golden hour”: 60 minutes, give or take, when the chance to save the life of a trauma victim is at its greatest. If the patient can be resuscitated and stabilized in that time window, they stand a good chance of surviving. If not, well, they don’t.

But resuscitation is complicated. It requires blood products, fluids, vasopressors — all given in precise doses in response to rapidly changing hemodynamics. To do it right takes specialized training, advanced life support (ALS). If the patient is in a remote area or an area without ALS-certified emergency medical services, or is far from the nearest trauma center, that golden hour is lost. And the patient may be as well.

But we live in the future. We have robots in factories, self-driving cars, autonomous drones. Why not an autonomous trauma doctor? If you are in a life-threatening accident, would you want to be treated ... by a robot?

Enter “resuscitation based on functional hemodynamic monitoring,” or “ReFit,” introduced in this article appearing in the journal Intensive Care Medicine Experimental.

The idea behind ReFit is straightforward. Resuscitation after trauma should be based on hitting key hemodynamic targets using the tools we have available in the field: blood, fluids, pressors. The researchers wanted to develop a closed-loop system, something that could be used by minimally trained personnel. The input to the system? Hemodynamic data, provided through a single measurement device, an arterial catheter. The output: blood, fluids, and pressors, delivered intravenously.

The body (a prototype) of the system looks like this. You can see various pumps labeled with various fluids, electronic controllers, and so forth.

[Photo: The ReFit prototype, with labeled fluid pumps and electronic controllers (Nate Langer, UPMC)]


If that’s the body, then this is the brain: a ruggedized laptop interpreting the readout from that arterial catheter.

[Photo: Ruggedized laptop running the ReFit software (Nate Langer, UPMC)]


If that’s the brain, then the ReFit algorithm is the mind. The algorithm does its best to leverage all the data it can, so I want to walk through it in a bit of detail.

[Figure: The ReFit resuscitation algorithm (Nate Langer, UPMC)]


First, check to see whether the patient is stable, defined as a heart rate < 110 beats/min and a mean arterial pressure > 60 mm Hg. If not, you’re off to the races, starting with a bolus of whole blood.

Next, the algorithm gets really interesting. If the patient is still unstable, the computer assesses fluid responsiveness by giving a test dose of fluid and measuring the pulse pressure variation. Greater pulse pressure variation means more fluid responsiveness and the algorithm gives more fluid. Less pulse pressure variation leads the algorithm to uptitrate pressors — in this case, norepinephrine.

This cycle of evaluation and response keeps repeating. The computer titrates fluids and pressors up and down entirely on its own, in theory freeing the human team members to do other things, like getting the patient to a trauma center for definitive care.
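
To make the loop concrete, here is a stripped-down sketch of that decision logic in Python. To be clear, this is my own illustration built from the description above; the pulse pressure variation cutoff and the action labels are placeholder assumptions, not the published ReFit parameters.

```python
# Illustrative sketch of a ReFit-style closed loop (not the actual ReFit code).
# The 13% pulse pressure variation cutoff and the action labels are assumptions.

def pulse_pressure_variation(pulse_pressures):
    """PPV (%) over a respiratory cycle: (PPmax - PPmin) / mean PP * 100."""
    pp_max, pp_min = max(pulse_pressures), min(pulse_pressures)
    return 100 * (pp_max - pp_min) / ((pp_max + pp_min) / 2)

def refit_step(heart_rate, mean_arterial_pressure, pulse_pressures, gave_initial_blood):
    """Return one action for the current evaluation cycle."""
    stable = heart_rate < 110 and mean_arterial_pressure > 60
    if stable:
        return "monitor"
    if not gave_initial_blood:
        return "bolus whole blood"
    # After a small fluid challenge, act on the measured pulse pressure variation:
    # high PPV suggests fluid responsiveness, low PPV argues for pressors.
    if pulse_pressure_variation(pulse_pressures) > 13:
        return "give fluid bolus"
    return "uptitrate norepinephrine"

# Example cycle: tachycardic, hypotensive, already received blood, low PPV.
print(refit_step(heart_rate=135, mean_arterial_pressure=48,
                 pulse_pressures=[38, 40, 41, 39], gave_initial_blood=True))
# -> uptitrate norepinephrine
```

In the real system, of course, this evaluation runs continuously against the arterial line readout and drives the pumps directly.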

So, how do you test whether something like this works? Clearly, you don’t want the trial run of a system like this to be used on a real human suffering from a real traumatic injury. 

Once again, we have animals to thank for research advances — in this case, pigs. Fifteen pigs are described in the study. To simulate a severe, hemorrhagic trauma, they were anesthetized and the liver was lacerated. They were then observed passively until the mean arterial pressure had dropped to below 40 mm Hg.

This is a pretty severe injury. Three unfortunate animals served as controls, two of which died within the 3-hour time window of the study. Eight animals were plugged into the ReFit system. 

For a window into what happens during this process, let’s take a look at the mean arterial pressure and heart rate readouts for one of the animals. You see that the blood pressure starts to fall precipitously after the liver laceration. The heart rate quickly picks up to compensate, raising the mean arterial pressure a bit, but this would be unsustainable with ongoing bleeding.

[Figure: Mean arterial pressure and heart rate after liver laceration (Intensive Care Medicine Experimental)]


Here, the ReFit system takes over. Autonomously, the system administers two units of blood, followed by fluids, and then norepinephrine or further fluids per the protocol I described earlier. 

[Figure: ReFit-administered blood, fluids, and norepinephrine with the resulting hemodynamic response (Intensive Care Medicine Experimental)]


The practical upshot of all of this is stabilization, despite an as-yet untreated liver laceration. 

Could an experienced ALS provider do this? Of course. But, as I mentioned before, you aren’t always near an experienced ALS provider.

This is all well and good in the lab, but in the real world, you actually need to transport a trauma patient. The researchers tried this also. To prove feasibility, four pigs were taken from the lab to the top of the University of Pittsburgh Medical Center, flown to Allegheny County Airport and back. Total time before liver laceration repair? Three hours. And all four survived. 

It won’t surprise you to hear that this work was funded by the Department of Defense. You can see how a system like this, made a bit more rugged, a bit smaller, and a bit more self-contained, could have real uses on the battlefield. But trauma is not unique to war, and something that can extend the time you have to safely transport a patient to definitive care — well, that’s worth its weight in golden hours. 
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Connecticut. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.


Fluoride, Water, and Kids’ Brains: It’s Complicated

Article Type
Changed
Thu, 05/23/2024 - 12:33

This transcript has been edited for clarity. 

I recently looked back at my folder full of these medical study commentaries, this weekly video series we call Impact Factor, and realized that I’ve been doing this for a long time. More than 400 articles, believe it or not. 

I’ve learned a lot in that time — about medicine, of course — but also about how people react to certain topics. If you’ve been with me this whole time, or even for just a chunk of it, you’ll know that I tend to take a measured approach to most topics. No one study is ever truly definitive, after all. But regardless of how even-keeled I may be, there are some topics that I just know in advance are going to be a bit divisive: studies about gun control; studies about vitamin D; and, of course, studies about fluoride.
 

Shall We Shake This Hornet’s Nest? 

The fluoridation of the US water system began in 1945 with the goal of reducing cavities in the population. The CDC named water fluoridation one of the 10 great public health achievements of the 20th century, along with such inarguable achievements as the recognition of tobacco as a health hazard.

But fluoridation has never been without its detractors. One problem is that the spectrum of beliefs about the potential harm of fluoridation is huge. On one end, you have science-based concerns such as the recognition that excessive fluoride intake can cause fluorosis and stain tooth enamel. I’ll note that the EPA regulates fluoride levels — there is a fair amount of naturally occurring fluoride in water tables around the world — to prevent this. And, of course, on the other end of the spectrum, you have beliefs that are essentially conspiracy theories: “They” add fluoride to the water supply to control us.

The challenge for me is that when one “side” of a scientific debate includes the crazy theories, it can be hard to discuss that whole spectrum, since there are those who will see evidence of any adverse fluoride effect as confirmation that the conspiracy theory is true. 

I can’t help this. So I’ll just say this up front: I am about to tell you about a study that shows some potential risk from fluoride exposure. I will tell you up front that there are some significant caveats to the study that call the results into question. And I will tell you up front that no one is controlling your mind, or my mind, with fluoride; they do it with social media.
 

Let’s Dive Into These Shark-Infested, Fluoridated Waters

We’re talking about the study, “Maternal Urinary Fluoride and Child Neurobehavior at Age 36 Months,” which appears in JAMA Network Open.

It’s a study of 229 mother-child pairs from the Los Angeles area. The moms had their urinary fluoride level measured once before 30 weeks of gestation. A neurobehavioral battery called the Preschool Child Behavior Checklist was administered to the children at age 36 months. 

The main thing you’ll hear about this study — in headlines, Facebook posts, and manifestos locked in drawers somewhere — is the primary result: A 0.68-mg/L increase in urinary fluoride in the mothers, about 25 percentile points, was associated with a doubling of the risk for neurobehavioral problems in their kids when they were 3 years old.

Yikes.

But this is not a randomized trial. Researchers didn’t randomly assign some women to have high fluoride intake and some women to have low fluoride intake. They knew that other factors that might lead to neurobehavioral problems could also lead to higher fluoride intake. They represent these factors in what’s known as a directed acyclic graph, as seen here, and account for them statistically using a regression equation.

[Figure: Directed acyclic graph of the confounders the authors adjusted for]


Not represented here are neighborhood characteristics. Los Angeles does not have uniformly fluoridated water, and neurobehavioral problems in kids are strongly linked to stressors in their environments. Fluoride level could be an innocent bystander.



I’m really just describing the classic issue of correlation versus causation here, the bane of all observational research and — let’s be honest — a bit of a crutch that allows us to disregard the results of studies we don’t like, provided the study wasn’t a randomized trial. 

But I have a deeper issue with this study than the old “failure to adjust for relevant confounders” thing, as important as that is.

The exposure of interest in this study is maternal urinary fluoride, as measured in a spot sample. It’s not often that I get to go deep on nephrology in this space, but let’s think about that for a second. Let’s assume for a moment that fluoride is toxic to the developing fetal brain, the main concern raised by the results of the study. How would that work? Presumably, mom would be ingesting fluoride from various sources (like the water supply), and that fluoride would get into her blood, and from her blood across the placenta to the baby’s blood, and into the baby’s brain.
 

 

 

Is Urinary Fluoride a Good Measure of Blood Fluoride?

It’s not great. Empirically, we have data that tell us that levels of urine fluoride are not all that similar to levels of serum fluoride. In 2014, a study investigated the correlation between urine and serum fluoride in a cohort of 60 schoolchildren and found a correlation coefficient of around 0.5. 

Why isn’t urine fluoride a great proxy for serum fluoride? The most obvious reason is the urine concentration. Human urine concentration can range from about 50 mmol/L to 1200 mmol/L (a 24-fold difference) depending on hydration status. Over the course of 24 hours, for example, the amount of fluoride you put out in your urine may be fairly stable in relation to intake, but for a spot urine sample it would be wildly variable. The authors know this, of course, and so they divide the measured urine fluoride by the specific gravity of the urine to give a sort of “dilution adjusted” value. That’s what is actually used in this study. But specific gravity is, itself, an imperfect measure of how dilute the urine is. 

This is something that comes up a lot in urinary biomarker research and it’s not that hard to get around. The best thing would be to just measure blood levels of fluoride. The second best option is 24-hour fluoride excretion. After that, the next best thing would be to adjust the spot concentration by other markers of urinary dilution — creatinine or osmolality — as sensitivity analyses. Any of these approaches would lend credence to the results of the study.
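As a sketch of what those sensitivity analyses could look like (hypothetical column names, not the study's data), one might compute creatinine- and osmolality-adjusted versions of the same spot value and rerun the same regression with each.

```python
# Hypothetical sketch of dilution-adjustment sensitivity analyses; column names are invented.
import pandas as pd

df = pd.read_csv("pairs.csv")  # assumed columns: urine_f_mg_l, creatinine_g_l, osm_mosm_kg

# Creatinine adjustment: express fluoride per gram of creatinine.
df["f_per_g_creatinine"] = df["urine_f_mg_l"] / df["creatinine_g_l"]

# Osmolality adjustment: scale each sample to a reference osmolality (e.g., the cohort median).
ref_osm = df["osm_mosm_kg"].median()
df["f_osm_adj"] = df["urine_f_mg_l"] * ref_osm / df["osm_mosm_kg"]

# Each adjusted exposure would then go through the same regression model to check
# whether the association is robust to the choice of dilution marker.
```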

There is another wrinkle: urinary fluoride excretion is pH dependent. The more acidic the urine, the less fluoride is excreted. Many things — including, importantly, diet — affect urine pH. And it is not a stretch to think that diet may also affect the developing fetus. Neither urine pH nor dietary habits were accounted for in this study. 

So, here we are. We have an observational study suggesting a harm that may be associated with fluoride. There may be a causal link here, in which case we need further studies to weigh the harm against the more well-established public health benefit. Or, this is all correlation — an illusion created by the limitations of observational data, and the unique challenges of estimating intake from a single urine sample. In other words, this study has something for everyone, fluoride boosters and skeptics alike. Let the arguments begin. But, if possible, leave me out of it.
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.
