Chatbots Seem More Empathetic Than Docs in Cancer Discussions
Jennie Smith, MDedge News

Large language models (LLMs) such as ChatGPT have shown mixed results in the quality of their responses to consumer questions about cancer.

One recent study found that AI chatbots churn out incomplete, inaccurate, or even nonsensical cancer treatment recommendations, while another found that they generate largely accurate — if technical — responses to the most common cancer questions.

While researchers have seen success with purpose-built chatbots created to address patient concerns about specific cancers, the consensus to date has been that generalized models like ChatGPT remain works in progress and that physicians should avoid pointing patients to them, for now.

Yet new findings suggest that these chatbots may do better than individual physicians, at least on some measures, when it comes to answering queries about cancer. For research published May 16 in JAMA Oncology (doi: 10.1001/jamaoncol.2024.0836), David Chen, a medical student at the University of Toronto, and his colleagues isolated a random sample of 200 questions related to cancer care addressed to doctors on the public online forum Reddit. They then compared responses from oncologists with responses generated by three different AI chatbots. The blinded responses were rated for quality, readability, and empathy by six physicians, including oncologists and palliative and supportive care specialists.

Mr. Chen and colleagues’ research was modeled after a 2023 study that compared the quality of physician responses with chatbot responses to general medicine questions addressed to doctors on Reddit. That study found that the chatbots produced more empathetic-sounding answers, something Mr. Chen’s study also found. The best-performing chatbot in Mr. Chen and colleagues’ study, Claude AI, scored significantly higher than the Reddit physicians on all the domains evaluated: quality, empathy, and readability.
 

Q&A With Author of New Research

Mr. Chen discussed his new study’s implications during an interview with this news organization.

Question: What is novel about this study?

Mr. Chen: We’ve seen many evaluations of chatbots that test for medical accuracy, but this study occurs in the domain of oncology care, where there are unique psychosocial and emotional considerations that are not precisely reflected in a general medicine setting. In effect, this study is putting these chatbots through a harder challenge.



Question: Why would chatbot responses seem more empathetic than those of physicians?

Mr. Chen: With the physician responses that we observed in our sample data set, we saw that there was very high variation in the amount of apparent effort. Some physicians would put in a lot of time and effort, thinking through their response, and others wouldn’t do so as much. These chatbots don’t face fatigue or burnout the way humans do. So they’re able to consistently provide responses with less variation in empathy.



Question: Do chatbots just seem empathetic because they are chattier?

Mr. Chen: We did think of verbosity as a potential confounder in this study. So we set a word count limit for the chatbot responses to keep them in the range of the physician responses. That way, verbosity was no longer a significant factor.



Question: How were quality and empathy measured by the reviewers?

Mr. Chen: For our study we used two teams of readers, each composed of three physicians. In terms of the actual metrics we used, they were pilot metrics. There are no well-defined measurement scales or checklists that we could use to measure empathy; this is an emerging field of research. So we came up with our own set of ratings by consensus, and we feel this is an area where future research should define a standardized set of guidelines.

Another novel aspect of this study is that we separated out different dimensions of quality and empathy. A quality response didn’t just mean it was medically accurate — quality also had to do with the focus and completeness of the response.

With empathy there are cognitive and emotional dimensions. Cognitive empathy involves using critical thinking to understand the person’s emotions and thoughts and then adjusting a response to fit that. A patient may not want the best medically indicated treatment for their condition, because they want to preserve their quality of life. The chatbot may be able to adjust its recommendation with consideration of some of those humanistic elements that the patient is presenting with.

Emotional empathy is more about being supportive of the patient’s emotions by using expressions like ‘I understand where you’re coming from’ or ‘I can see how that makes you feel.’



Question: Why would physicians, not patients, be the best evaluators of empathy?

Mr. Chen: We’re actually very interested in evaluating patient ratings of empathy. We are conducting a follow-up study that evaluates patient ratings of empathy for the same set of chatbot and physician responses, to see if there are differences.



Question: Should cancer patients go ahead and consult chatbots?

Mr. Chen: Although we did observe higher scores on all of the metrics compared with physicians, this is a very specialized evaluation scenario where we’re using these Reddit questions and responses.

Naturally, we would need to do a trial, a head-to-head randomized comparison of physicians versus chatbots.

This pilot study does highlight the promising potential of these chatbots to suggest responses. But we can’t fully recommend that they be used as standalone clinical tools without physicians.

This Q&A was edited for clarity.
