An FP’s guide to AI-enabled clinical decision support


Computer technology and artificial intelligence (AI) have come a long way in several decades:

  • Between 1971 and 1996, access to the Medline database was primarily limited to university libraries and other institutions; in 1997, the database became universally available online as PubMed.1
  • In 2004, the President of the United States issued an executive order that launched a 10-year plan to put electronic health records (EHRs) in place nationwide; EHRs are now employed in nearly 9 of 10 (85.9%) medical offices.2

Over time, numerous online resources sprouted as well, including DXplain, UpToDate, and ClinicalKey, to name a few. These digital tools were impressive for their time, but many of them are now considered “old-school” AI-enabled clinical decision support.

In the past 2 to 3 years, innovative clinicians and technologists have pushed medicine into a new era that takes advantage of machine learning (ML)-enhanced diagnostic aids, software systems that predict disease progression, and advanced clinical pathways to help individualize treatment. Enthusiastic early adopters believe these resources are transforming patient care—although skeptics remain unconvinced, cautioning that they have yet to prove their worth in everyday clinical practice.

In this review, we first analyze the strengths and weaknesses of evidence supporting these tools, then propose a potential role for them in family medicine.

Machine learning takes on retinopathy

The term “artificial intelligence” has been with us for longer than a half century.3 In the broadest sense, AI refers to any computer system capable of automating a process usually performed manually by humans. But the latest innovations in AI take advantage of a subset of AI called “machine learning”: the ability of software systems to learn new functionality or insights on their own, without additional programming from human data engineers. Case in point: A software platform has been developed that is capable of diagnosing or screening for diabetic retinopathy without the involvement of an experienced ophthalmologist.


The landmark study that started clinicians and health care executives thinking seriously about the potential role of ML in medical practice was spearheaded by Varun Gulshan, PhD, at Google, and associates from several medical schools.4 Gulshan and his colleagues used an artificial neural network designed to mimic the functions of the human nervous system to analyze more than 128,000 retinal images, looking for evidence of diabetic retinopathy. (See “Deciphering artificial neural networks” for an explanation of how such networks function.5) The algorithm they employed was compared with the diagnostic skills of several board-certified ophthalmologists.



SIDEBAR
Deciphering artificial neural networks

The promise of health care information technology relies heavily on statistical methods and software constructs, including logistic regression, random forest modeling, clustering, and neural networks. The machine learning-enabled image analysis used to detect diabetic retinopathy and to differentiate a malignant melanoma from a normal mole is based on neural networking.

As we discussed in the body of this article, these networks mimic the nervous system, in that they comprise computer-generated “neurons,” or nodes, connected by “synapses” (FIGURE).5 When a node in Layer 1 is excited by pixels coming from a scanned image, it sends on that excitement, represented by a numerical value, to a second set of nodes in Layer 2, which, in turn, sends signals to the next layer—and so on.

Eventually, the software’s interpretation of the pixels of the image reaches the output layer of the network, generating a negative or positive diagnosis. Initially, many of those interpretations are wrong; during training, the errors are corrected by a backward analytic process called backpropagation, which adjusts the strength of the network’s connections. The video tutorials mentioned in the main text provide a more detailed explanation of neural networking.
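For readers who want to see the arithmetic behind this description, the following Python sketch steps through a single forward pass and a single backpropagation update in a deliberately tiny network. It is purely illustrative: the layer sizes, pixel values, label, and learning rate are invented for the example, and the retinopathy systems discussed in this article rely on much larger convolutional networks trained on enormous sets of labeled photographs.

```python
import numpy as np

# Illustrative only: one forward pass and one backpropagation step in a tiny
# 2-layer network that scores a flattened "image" as retinopathy (1) or
# no retinopathy (0). All values here are invented for the example.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.random(100)   # 100 pixel intensities scaled to 0-1 (hypothetical)
y_true = 1.0          # label from the grading panel (assumed)

W1, b1 = rng.normal(0, 0.1, (16, 100)), np.zeros(16)   # Layer 1: 100 inputs -> 16 nodes
W2, b2 = rng.normal(0, 0.1, (1, 16)), np.zeros(1)      # output layer: 16 nodes -> 1

# Forward pass: each layer passes its "excitement" (numeric activations) onward.
h = sigmoid(W1 @ x + b1)           # hidden-layer activations
y_hat = sigmoid(W2 @ h + b2)[0]    # output: probability of retinopathy

# Backpropagation: push the prediction error backward and nudge every weight
# a small step in the direction that reduces the error.
lr = 0.5
err = y_hat - y_true                       # gradient at the output (cross-entropy loss)
dh = (W2.flatten() * err) * h * (1 - h)    # error attributed to each hidden node
W2 -= lr * err * h.reshape(1, -1)          # update output-layer weights
b2 -= lr * err
W1 -= lr * np.outer(dh, x)                 # update Layer 1 weights
b1 -= lr * dh

print(f"predicted probability of retinopathy before further training: {y_hat:.3f}")
```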

FIGURE: How does a neural network operate?

Using the area under the receiver operating characteristic curve (AUROC) as a metric, and choosing an operating point for high specificity, the algorithm generated sensitivity of 87% and 90.3% and specificity of 98.1% and 98.5% for 2 validation data sets for detecting referable retinopathy, as defined by a panel of at least 7 ophthalmologists. When the operating point was instead chosen for high sensitivity, the algorithm generated sensitivity of 97.5% and 96.1% and specificity of 93.4% and 93.9% for the same 2 data sets.
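To see how the choice of operating point trades sensitivity against specificity, consider the toy calculation below. The probability scores and labels are synthetic, not data from the study; the point is only that lowering the decision threshold catches more true cases at the price of more false alarms, and raising it does the reverse.

```python
import numpy as np

# Synthetic example (not the Gulshan et al data): 10 retinal images with a
# model-assigned probability of referable retinopathy and the panel's label.
scores = np.array([0.05, 0.10, 0.20, 0.35, 0.40, 0.55, 0.70, 0.80, 0.90, 0.95])
labels = np.array([0,    0,    0,    0,    1,    0,    1,    1,    1,    1])

def sens_spec(threshold):
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0))
    fp = np.sum(pred & (labels == 0))
    fn = np.sum(~pred & (labels == 1))
    return tp / (tp + fn), tn / (tn + fp)

# A low threshold favors sensitivity (catch every case, more false alarms);
# a high threshold favors specificity (fewer false alarms, more missed cases).
for t in (0.3, 0.6):
    sens, spec = sens_spec(t)
    print(f"threshold {t:.1f}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

Reporting both operating points, as the Gulshan group did, reflects this trade-off: a screening program may prefer the high-sensitivity setting, whereas a confirmatory workflow may prefer high specificity.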

These results are impressive, but the researchers used a retrospective approach in their analysis. A prospective analysis would provide stronger evidence.

That shortcoming was addressed by the pivotal clinical trial that convinced the US Food and Drug Administration (FDA) to clear the technology for marketing. Michael Abràmoff, MD, PhD, of the University of Iowa Department of Ophthalmology and Visual Sciences, and his associates6 conducted a prospective study that compared an ML-based algorithm, the commercialized IDx-DR, with the gold standard for detecting retinopathy: grading by the Fundus Photograph Reading Center of the University of Wisconsin School of Medicine and Public Health. IDx-DR is a software system used in combination with a fundus camera to capture and analyze retinal images. The researchers found that “the AI system exceeded all pre-specified superiority endpoints at sensitivity of 87.2% ... [and] specificity of 90.7% ....”


The FDA clearance statement for this technology7 limits its use, emphasizing that it is intended only as a screening tool, not a stand-alone diagnostic system. Because IDx-DR is being used in primary care, the FDA states that patients who have a positive result should be referred to an eye care professional. The technology is contraindicated in patients who have a history of laser treatment, surgery, or injection in the eye, or who have any of the following: persistent vision loss, blurred vision, floaters, previously diagnosed macular edema, severe nonproliferative retinopathy, proliferative retinopathy, radiation retinopathy, or retinal vein occlusion. It is also not intended for pregnant patients because their eye disease often progresses rapidly.


Keep additional caveats in mind when evaluating this new technology: although the software can help detect retinopathy, it does not address other key eye problems in this patient population, including cataracts and glaucoma. Cost also requires attention: the software must be used in conjunction with a specific retinal camera, the Topcon TRC-NW400, which can cost as much as $20,000 new.


Speaking of cost: health care providers and insurers still question whether implementing AI-enabled systems is cost-effective. It is too early to say definitively how AI and ML will affect health care expenditures, because the most promising systems have yet to be fully implemented in hospitals and medical practices nationwide. Forbes projects that private investment in health care AI will reach $6.6 billion by 2021; on a more confident note, an Accenture analysis predicts that the best possible application of AI might save the health care sector $150 billion annually by 2026.8

What role might this diabetic retinopathy technology play in family medicine? Physicians are constantly advising patients who have diabetes about the need to have a regular ophthalmic examination to check for early signs of retinopathy—advice that is often ignored. The American Academy of Ophthalmology points out that “6 out of 10 people with diabetes skip a sight-saving exam.”9 When a patient is screened with this type of device and found to be at high risk of eye disease, however, the advice to see an eye-care specialist might carry more weight.

Screening colonoscopy: Improving patient incentives

No responsible physician doubts the value of screening colonoscopy in patients 50 years and older, but many patients have yet to realize that the procedure just might save their lives. Is there a way to incentivize resistant patients to undergo colonoscopy? An ML-based software system that requires access to only a few readily available parameters might provide the needed impetus for many patients.


A large-scale validation study performed on data from Kaiser Permanente Northwest found that it is possible to estimate a person’s risk of colorectal cancer by using age, gender, and complete blood count.10 This retrospective investigation analyzed more than 17,000 Kaiser Permanente patients, including 900 who already had colorectal cancer. The analysis generated a risk score for patients who did not have the malignancy to gauge their likelihood of developing it. The algorithms were more sensitive for detecting tumors of the cecum and ascending colon and less sensitive for detecting tumors of the transverse colon, sigmoid colon, and rectum.
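The sketch below shows, with invented data, how a risk model of this kind might be assembled from age, sex, and complete blood count values. It is not the commercial algorithm evaluated in these studies, which is a proprietary model trained on hundreds of thousands of records; the simulated cohort, the logistic regression classifier, and the example patient are all assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch only: a colorectal cancer risk score built from age,
# sex, and a few complete blood count values, trained on simulated data.
rng = np.random.default_rng(42)
n = 5000

age = rng.integers(40, 90, n)
male = rng.integers(0, 2, n)
hemoglobin = rng.normal(14.0, 1.5, n)    # g/dL
mcv = rng.normal(90.0, 5.0, n)           # fL
platelets = rng.normal(250.0, 60.0, n)   # 10^3/uL

# Simulated outcome: occult colorectal cancer is more likely with older age
# and lower hemoglobin (iron-deficiency anemia from chronic blood loss).
logit = -9.0 + 0.05 * age - 0.6 * (hemoglobin - 14.0)
cancer = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, male, hemoglobin, mcv, platelets])
model = LogisticRegression(max_iter=1000).fit(X, cancer)

# Score a new (hypothetical) patient: older, anemic, microcytic.
patient = np.array([[68, 1, 10.9, 74.0, 310.0]])
risk = model.predict_proba(patient)[0, 1]
print(f"estimated risk score: {risk:.3f}")
```

In practice, the highest-scoring patients, not a single individual, would be flagged for colonoscopy outreach, which is how the platform was used in the prospective study described next.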

To provide more definitive evidence of the value of the software platform, a prospective study was subsequently conducted on more than 79,000 patients who had initially declined colorectal cancer screening. The platform, called ColonFlag, was used to identify the 688 patients at highest risk, who were then offered screening colonoscopy. In this subgroup, 254 agreed to the procedure; ColonFlag identified 19 malignancies (7.5%) among patients within the Maccabi Health System (Israel), and 15 more in patients outside that health system.11 (In the United States, the same program is known as LGI Flag and has been cleared by the FDA.)

Although ColonFlag has the potential to reduce the incidence of colorectal cancer, other evidence-based screening modalities are highlighted in US Preventive Services Task Force guidelines, including the guaiac-based fecal occult blood test and the fecal immunochemical test.12

 

Beyond screening to applications in managing disease

The complex etiology of sepsis makes the condition difficult to treat; that complexity has also led to disagreement on the best course of management. Using an ML algorithm called the “Artificial Intelligence Clinician,” Komorowski and associates13 extracted data from 2 large, nonoverlapping intensive care unit databases of US adults. The researchers’ analysis identified 48 variables that likely influence sepsis outcomes, including:

  • demographics,
  • Elixhauser premorbid status,
  • vital signs,
  • clinical laboratory data,
  • intravenous fluids given, and
  • vasopressors administered.

Komorowski and co-workers concluded that “… mortality was lowest in patients for whom clinicians’ actual doses matched the AI decisions. Our model provides individualized and clinically interpretable treatment decisions for sepsis that could improve patient outcomes.”

A randomized clinical trial has found that an ML program that uses only 6 common clinical markers—blood pressure, heart rate, temperature, respiratory rate, peripheral capillary oxygen saturation (SpO2), and age—can improve clinical outcomes in patients with severe sepsis.14 The alerts generated by the algorithm were used to guide treatment. Average length of stay was 13 days in controls, compared with 10.3 days in patients evaluated with the ML algorithm. The algorithm was also associated with a 12.4-percentage-point reduction in in-hospital mortality.
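As a rough illustration of how an alert might be built from those 6 bedside inputs, consider the toy scoring function below. The algorithm studied in the trial is a proprietary ML model that learns weights and interactions among trends in these measurements; the hard cutoffs and the 3-point alert threshold here are invented for the example.

```python
from dataclasses import dataclass

# Hypothetical illustration only: a crude alert built from the same 6 inputs
# the trial's algorithm used. A trained model would weight and combine these
# continuously rather than counting threshold violations.

@dataclass
class Vitals:
    systolic_bp: float   # mm Hg
    heart_rate: float    # beats/min
    temp_c: float        # degrees Celsius
    resp_rate: float     # breaths/min
    spo2: float          # %
    age: float           # years

def sepsis_risk_points(v: Vitals) -> int:
    """Count warning signs; each True comparison adds 1 point."""
    points = 0
    points += v.systolic_bp < 100
    points += v.heart_rate > 100
    points += v.temp_c > 38.3 or v.temp_c < 36.0
    points += v.resp_rate > 22
    points += v.spo2 < 92
    points += v.age >= 65
    return points

patient = Vitals(systolic_bp=92, heart_rate=118, temp_c=38.9,
                 resp_rate=26, spo2=90, age=71)

if sepsis_risk_points(patient) >= 3:   # assumed alert threshold
    print("ALERT: evaluate for severe sepsis and consider early treatment")
```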


Addressing challenges, tapping resources

Advances in the management of diabetic retinopathy, colorectal cancer, and sepsis are the tip of the AI iceberg. There are now ML programs to distinguish melanoma from benign nevi; to improve insulin dosing for patients with type 1 diabetes; to predict which hospital patients are most likely to end up in the intensive care unit; and to mitigate the opioid epidemic.

An ML Web page on the JAMA Network (https://sites.jamanetwork.com/machine-learning/) features a long list of published research studies, reviews, and opinion papers suggesting that the future of medicine is closely tied to innovative developments in this area. This Web page also addresses the potential use of ML in detecting lymph node metastases in breast cancer, the need to temper AI with human intelligence, the role of AI in clinical decision support, and more.

The JAMA Network also discusses a few of the challenges that still need to be overcome in developing ML tools for clinical medicine—challenges that you will want to be cognizant of as you evaluate new research in the field.

Black-box dilemma. As technologists introduce new programs with the potential to improve diagnosis, treatment, and prognosis, they face the so-called black-box dilemma: the complex data science, advanced statistics, and mathematical equations that underpin ML algorithms make it difficult to explain how the software arrives at its output, which, in turn, makes many clinicians skeptical of its worth.


For example, the neural networks that are the backbone of the retinopathy algorithm discussed earlier might seem like voodoo science to those unfamiliar with the technology. It’s fortunate that several technology-savvy physicians have mastered these digital tools and have the teaching skills to explain them in plain-English tutorials. One such tutorial, “Understanding How Machine Learning Works,” is posted on the JAMA Network (https://sites.jamanetwork.com/machine-learning/#multimedia). A more basic explanation was included in a recent Public Broadcasting Service “Nova” episode, viewable at www.youtube.com/watch?v=xS2G0oolHpo.


Limited analysis. Another problem that plagues many ML-based algorithms is that they have been tested on only a single data set. (Typically, a data set refers to a collection of clinical parameters from a patient population.) For example, researchers developing an algorithm might collect their data from a single health care system.

Several investigators have addressed this shortcoming by testing their software on 2 completely independent patient populations. Banda and colleagues15 recently developed a software platform to improve detection of familial hypercholesterolemia, a significant cause of premature cardiovascular disease and death that affects approximately 1 of every 250 people. Despite the urgency of identifying the disorder and providing potentially lifesaving treatment, only 10% of patients receive an accurate diagnosis.16 Banda and colleagues developed a machine learning classifier that is far more effective than the traditional screening approach now in use.

To address the generalizability of the algorithm, it was tested on EHR data from 2 independent health care systems: Stanford Health Care and Geisinger Health System. In Stanford patients, the positive predictive value of the algorithm was 88%, with a sensitivity of 75%; it identified 84% of affected patients at the highest probability threshold. In Geisinger patients, the classifier generated a positive predictive value of 85%.
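For readers who want to see how such figures are derived, here is a brief worked example of positive predictive value and sensitivity. The counts are hypothetical, chosen only so that the resulting percentages roughly match those reported for the Stanford cohort; they are not taken from the Banda study.

```python
# Hypothetical counts (not the study's data) illustrating how PPV and
# sensitivity are calculated from a classifier's results.
true_positives  = 88   # flagged by the algorithm and confirmed to have FH
false_positives = 12   # flagged but found not to have FH
false_negatives = 29   # missed by the algorithm despite having FH

ppv = true_positives / (true_positives + false_positives)          # 88/100 = 88%
sensitivity = true_positives / (true_positives + false_negatives)  # 88/117 ~ 75%
print(f"PPV {ppv:.0%}, sensitivity {sensitivity:.0%}")
```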

The future of these technologies

AI and ML are not panaceas that will revolutionize medicine in the near future. Likewise, the digital tools discussed in this article are not going to solve multiple complex medical problems addressed during a single office visit. But physicians who ignore mounting evidence that supports these emerging technologies will be left behind by more forward-thinking colleagues.


A recent commentary in Gastroenterology17 sums up the situation best: “It is now too conservative to suggest that CADe [computer-assisted detection] and CADx [computer-assisted diagnosis] carry the potential to revolutionize colonoscopy. The artificial intelligence revolution has already begun.”

CORRESPONDENCE
Paul Cerrato, MA, cerrato@aol.com, pcerrato@optonline.net. John Halamka, MD, MS, john.halamka@bilh.org.

References

1. Lindberg DA. Internet access to National Library of Medicine. Eff Clin Pract. 2000;3:256-260.

2. National Center for Health Statistics, Centers for Disease Control and Prevention. Electronic medical records/electronic health records (EMRs/EHRs). www.cdc.gov/nchs/fastats/electronic-medical-records.htm. Updated March 31, 2017. Accessed October 1, 2019.

3. Smith C, McGuire B, Huang T, et al. The history of artificial intelligence. University of Washington. https://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf. Published December 2006. Accessed October 1, 2019.

4. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316:2402-2410.

5. Cerrato P, Halamka J. The Transformative Power of Mobile Medicine. Cambridge, MA: Academic Press; 2019.

6. Abràmoff MD, Lavin PT, Birch M, et al. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit Med. 2018;1:39.

7. US Food and Drug Administration. FDA permits marketing of artificial intelligence-based device to detect certain diabetes-related eye problems. Press release. www.fda.gov/news-events/press-announcements/fda-permits-marketing-artificial-intelligence-based-device-detect-certain-diabetes-related-eye. Published April 11, 2018. Accessed October 1, 2019.

8. AI and healthcare: a giant opportunity. Forbes Web site. www.forbes.com/sites/insights-intelai/2019/02/11/ai-and-healthcare-a-giant-opportunity/#5906c4014c68. Published February 11, 2019. Accessed October 25, 2019.

9. Boyd K. Six out of 10 people with diabetes skip a sight-saving exam. American Academy of Ophthalmology Web site. https://www.aao.org/eye-health/news/sixty-percent-skip-diabetic-eye-exams. Published November 1, 2016. Accessed October 25, 2019.

10. Hornbrook MC, Goshen R, Choman E, et al. Early colorectal cancer detected by machine learning model using gender, age, and complete blood count data. Dig Dis Sci. 2017;62:2719-2727.

11. Goshen R, Choman E, Ran A, et al. Computer-assisted flagging of individuals at high risk of colorectal cancer in a large health maintenance organization using the ColonFlag test. JCO Clin Cancer Inform. 2018;2:1-8.

12. US Preventive Services Task Force. Final recommendation statement: colorectal cancer: screening. www.uspreventiveservicestaskforce.org/Page/Document/RecommendationStatementFinal/colorectal-cancer-screening2#tab. Published May 2019. Accessed October 1, 2019.

13. Komorowski M, Celi LA, Badawi O, et al. The artificial intelligence clinician learns optimal treatment strategies for sepsis in intensive care. Nat Med. 2018;24:1716-1720.

14. Shimabukuro DW, Barton CW, Feldman MD, et al. Effect of a machine learning-based severe sepsis prediction algorithm on patient survival and hospital length of stay: a randomised clinical trial. BMJ Open Respir Res. 2017;4:e000234.

15. Banda J, Sarraju A, Abbasi F, et al. Finding missed cases of familial hypercholesterolemia in health systems using machine learning. NPJ Digit Med. 2019;2:23.

16. What is familial hypercholesterolemia? FH Foundation Web site. https://thefhfoundation.org/familial-hypercholesterolemia/what-is-familial-hypercholesterolemia. Accessed November 1, 2019.

17. Byrne MF, Shahidi N, Rex DK. Will computer-aided detection and diagnosis revolutionize colonoscopy? Gastroenterology. 2017;153:1460-1464.E1.

Author and Disclosure Information

Harvard Medical School, Boston, Mass, and New England Healthcare Exchange Network (Dr. Halamka); Beth Israel Deaconess Medical Center, New York, NY, and Warwick, NY (Mr. Cerrato; affiliated independent medical journalist). Dr. Halamka and Mr. Cerrato are coauthors of Realizing the Promise of Precision Medicine and The Transformative Power of Mobile Medicine.
cerrato@aol.com, pcerrato@optonline.net

The authors reported no potential conflict of interest relevant to this article.

Author and Disclosure Information

Harvard Medical School, Boston, Mass, and New England Healthcare Exchange Network (Dr. Halamka); Beth Israel Deaconess Medical Center, New York, NY, and Warwick, NY (Mr. Cerrato; affiliated independent medical journalist). Dr. Halamka and Mr. Cerrato are coauthors of Realizing the Promise of Precision Medicine and The Transformative Power of Mobile Medicine.
cerrato@aol.com, pcerrato@optonline.net

The authors reported no potential conflict of interest relevant to this article.

Author and Disclosure Information

Harvard Medical School, Boston, Mass, and New England Healthcare Exchange Network (Dr. Halamka); Beth Israel Deaconess Medical Center, New York, NY, and Warwick, NY (Mr. Cerrato; affiliated independent medical journalist). Dr. Halamka and Mr. Cerrato are coauthors of Realizing the Promise of Precision Medicine and The Transformative Power of Mobile Medicine.
cerrato@aol.com, pcerrato@optonline.net

The authors reported no potential conflict of interest relevant to this article.

Article PDF
Article PDF

Computer technology and artificial intelligence (AI) have come a long way in several decades:

  • Between 1971 and 1996, access to the Medline database was primarily limited to university libraries and other institutions; in 1997, the database became universally available online as PubMed.1
  • In 2004, the President of the United States issued an executive order that launched a 10-year plan to put electronic health records (EHRs) in place nationwide; EHRs are now employed in nearly 9 of 10 (85.9%) medical offices.2

Over time, numerous online resources sprouted as well, including DxPlain, UpToDate, and Clinical Key, to name a few. These digital tools were impressive for their time, but many of them are now considered “old-school” AI-enabled clinical decision support.

In the past 2 to 3 years, innovative clinicians and technologists have pushed medicine into a new era that takes advantage of machine learning (ML)-enhanced diagnostic aids, software systems that predict disease progression, and advanced clinical pathways to help individualize treatment. Enthusiastic early adopters believe these resources are transforming patient care—although skeptics remain unconvinced, cautioning that they have yet to prove their worth in everyday clinical practice.

In this review, we first analyze the strengths and weaknesses of evidence supporting these tools, then propose a potential role for them in family medicine.

Machine learning takes on retinopathy

The term “artificial intelligence” has been with us for longer than a half century.3 In the broadest sense, AI refers to any computer system capable of automating a process usually performed manually by humans. But the latest innovations in AI take advantage of a subset of AI called “machine learning”: the ability of software systems to learn new functionality or insights on their own, without additional programming from human data engineers. Case in point: A software platform has been developed that is capable of diagnosing or screening for diabetic retinopathy without the involvement of an experienced ophthalmologist.

A software platform has been developed that is capable of diagnosing or screening for diabetic retinopathy without the involvement of an experienced ophthalmologist.

The landmark study that started clinicians and health care executives thinking seriously about the potential role of ML in medical practice was spearheaded by ­Varun Gulshan, PhD, at Google, and associates from several medical schools.4 Gulshan used an artificial neural network designed to mimic the functions of the human nervous system to analyze more than 128,000 retinal images, looking for evidence of diabetic retinopathy. (See “Deciphering artificial neural networks,” for an explanation of how such networks function.5) The algorithm they employed was compared with the diagnostic skills of several board-certified ophthalmologists.

[polldaddy:10453606]

Continue to: Deciperhing artificial neural networks

 

 

SIDEBAR
Deciphering artificial neural networks

The promise of health care information technology relies heavily on statistical methods and software constructs, including logistic regression, random forest modeling, clustering, and neural networks. The machine learning-enabled image analysis used to detect diabetic retinopathy and to differentiate a malignant melanoma and a normal mole is based on neural networking.

As we discussed in the body of this article, these networks mimic the nervous system, in that they comprise computer-generated “neurons,” or nodes, and are connected by “synapses” (FIGURE5). When a node in Layer 1 is excited by pixels coming from a scanned image, it sends on that excitement, represented by a numerical value, to a second set of nodes in Layer 2, which, in turns, sends signals to the next layer— and so on.

Eventually, the software’s interpretation of the pixels of the image reaches the output layer of the network, generating a negative or positive diagnosis. The initial process results in many interpretations, which are corrected by a backward analytic process called backpropagation. The video tutorials mentioned in the main text provide a more detailed explanation of neural networking.

How does a neural network operate?

 

Using an area-under-the-receiver operating curve (AUROC) as a metric, and choosing an operating point for high specificity, the algorithm generated sensitivity of 87% and 90.3% and specificity of 98.1% and 98.5% for 2 validation data sets for detecting referable retinopathy, as defined by a panel of at least 7 ophthalmologists. When AUROC was set for high sensitivity, the algorithm generated sensitivity of 97.5% and 96.1% and specificity of 93.4% and 93.9% for the 2 data sets.

These results are impressive, but the researchers used a retrospective approach in their analysis. A prospective analysis would provide stronger evidence.

That shortcoming was addressed by a pivotal clinical trial that convinced the US Food and Drug Administration (FDA) to approve the technology. Michael Abramoff, MD, PhD, at the University of Iowa Department of Ophthalmology and Visual Sciences and his associates6 conducted a prospective study that compared the gold standard for detecting retinopathy, the Fundus Photograph Reading Center (of the University of Wisconsin School of Medicine and Public Health), to an ML-based algorithm, the commercialized IDx-DR. The IDx-DR is a software system that is used in combination with a fundal camera to capture retinal images. The researchers found that “the AI system exceeded all pre-specified superiority endpoints at sensitivity of 87.2% ... [and] specificity of 90.7% ....”

Continue to: The FDA clearance statement...

 

 

The FDA clearance statement for this technology7 limits its use, emphasizing that it is intended only as a screening tool, not a stand-alone diagnostic system. Because ­IDx-DR is being used in primary care, the FDA states that patients who have a positive result should be referred to an eye care professional. The technology is contraindicated in patients who have a history of laser treatment, surgery, or injection in the eye or who have any of the following: persistent vision loss, blurred vision, floaters, previously diagnosed macular edema, severe nonproliferative retinopathy, proliferative retinopathy, radiation retinopathy, and retinal vein occlusion. It is also not intended for pregnant patients because their eye disease often progresses rapidly.

A large-scale validation study performed on data from Kaiser Permanente Northwest found that it is possible to estimate a person's risk of colorectal cancer by using age, gender, and complete blood count.

Additional caveats to keep in mind when evaluating this new technology include that, although the software can help detect retinopathy, it does not address other key issues for this patient population, including cataracts and glaucoma. The cost of the new technology also requires attention: Software must be used in conjunction with a specific retinal camera, the Topcon TRC-NW400, which is expensive (new, as much as $20,000).

Eye with artificial intelligence
IMAGE: ©GETTY IMAGES

Speaking of cost: Health care providers and insurers still question whether implementing AI-enabled systems is cost-­effective. It is too early to say definitively how AI and machine learning will have an impact on health care expenditures, because the most promising technological systems have yet to be fully implemented in hospitals and medical practices nationwide. Projections by Forbes suggest that private investment in health care AI will reach $6.6 billion by 2021; on a more confident note, an Accenture analysis predicts that the best possible application of AI might save the health care sector $150 billion annually by 2026.8

What role might this diabetic retinopathy technology play in family medicine? Physicians are constantly advising patients who have diabetes about the need to have a regular ophthalmic examination to check for early signs of retinopathy—advice that is often ignored. The American Academy of Ophthalmology points out that “6 out of 10 people with diabetes skip a sight-saving exam.”9 When a patient is screened with this type of device and found to be at high risk of eye disease, however, the advice to see an eye-care specialist might carry more weight.

Screening colonoscopy: Improving patient incentives

No responsible physician doubts the value of screening colonoscopy in patients 50 years and older, but many patients have yet to realize that the procedure just might save their life. Is there a way to incentivize resistant patients to have a colonoscopy performed? An ML-based software system that only requires access to a few readily available parameters might be the needed impetus for many patients.

Continue to: A large-scale validation...

 

 

A large-scale validation study performed on data from Kaiser Permanente Northwest found that it is possible to estimate a person’s risk of colorectal cancer by using age, gender, and complete blood count.10 This retrospective investigation analyzed more than 17,000 Kaiser Permanente patients, including 900 who already had colorectal cancer. The analysis generated a risk score for patients who did not have the malignancy to gauge their likelihood of developing it. The algorithms were more sensitive for detecting tumors of the cecum and ascending colon, and less sensitive for detection of tumors of the transverse and sigmoid colon and rectum.

To provide more definitive evidence to support the value of the software platform, a prospective study was subsequently conducted on more than 79,000 patients who had initially declined to undergo colorectal screening. The platform, called ColonFlag, was used to detect 688 patients at highest risk, who were then offered screening colonoscopy. In this subgroup, 254 agreed to the procedure; ColonFlag identified 19 malignancies (7.5%) among patients within the Maccabi Health System (Israel), and 15 more in patients outside that health system.11 (In the United States, the same program is known as LGI Flag and has been cleared by the FDA.)

Although ColonFlag has the potential to reduce the incidence of colorectal cancer, other evidence-based screening modalities are highlighted in US Preventive Services Task Force guidelines, including the guaiac-based fecal occult blood test and the fecal immunochemical test.12

 

Beyond screening to applications in managing disease

The complex etiology of sepsis makes the condition difficult to treat. That complexity has also led to disagreement on the best course of management. Using an ML algorithm called an “Artificial Intelligence Clinician,” Komorowski and associates13 extracted data from a large data set from 2 nonoverlapping intensive care unit databases collected from US adults.The researchers’ analysis suggested a list of 48 variables that likely influence sepsis outcomes, including:

  • demographics,
  • Elixhauser premorbid status,
  • vital signs,
  • clinical laboratory data,
  • intravenous fluids given, and
  • vasopressors administered.

Komorowski and co-workers concluded that “… mortality was lowest in patients for whom clinicians’ actual doses matched the AI decisions. Our model provides individualized and clinically interpretable treatment decisions for sepsis that could improve patient outcomes.”

A randomized clinical trial has found that an ML program that uses only 6 common clinical markers—blood pressure, heart rate, temperature, respiratory rate, peripheral capillary oxygen saturation (SpO2), and age—can improve clinical outcomes in patients with severe sepsis.14 The alerts generated by the algorithm were used to guide treatment. Average length of stay was 13 days in controls, compared with 10.3 days in those evaluated with the ML algorithm. The algorithm was also associated with a 12.4% drop in in-­hospital mortality.

Continue to: Addressing challenges, tapping resources

 

 

Addressing challenges, tapping resources

Advances in the management of diabetic retinopathy, colorectal cancer, and sepsis are the tip of the AI iceberg. There are now ML programs to distinguish melanoma from benign nevi; to improve insulin dosing for patients with type 1 diabetes; to predict which hospital patients are most likely to end up in the intensive care unit; and to mitigate the opioid epidemic.

An ML Web page on the JAMA Network (https://sites.jamanetwork.com/machine-learning/) features a long list of published research studies, reviews, and opinion papers suggesting that the future of medicine is closely tied to innovative developments in this area. This Web page also addresses the potential use of ML in detecting lymph node metastases in breast cancer, the need to temper AI with human intelligence, the role of AI in clinical decision support, and more.

The JAMA Network also discusses a few of the challenges that still need to be overcome in developing ML tools for clinical medicine—challenges that you will want to be cognizant of as you evaluate new research in the field.

Black-box dilemma. A challenge that technologists face as they introduce new programs that have the potential to improve diagnosis, treatment, and prognosis is a phenomenon called the “black-box dilemma,” which refers to the complex data science, advanced statistics, and mathematical equations that underpin ML algorithms. These complexities make it difficult to explain the mechanism of action upon which software is based, which, in turn, makes many clinicians skeptical about its worth.

A randomized clinical trial has found that an ML program that uses only 6 common clinical markers can improve clinical outcomes in patients with severe sepsis.

For example, the neural networks that are the backbone of the retinopathy algorithm discussed earlier might seem like voodoo science to those unfamiliar with the technology. It’s fortunate that several technology-savvy physicians have mastered these digital tools and have the teaching skills to explain them in plain-English tutorials. One such tutorial, “Understanding How Machine Learning Works,” is posted on the JAMA Network (https://sites.­jamanetwork.com/machine-learning/#multimedia). A more basic explanation was included in a recent Public Broadcasting System “Nova” episode, viewable at www.youtube.com/watch?v=xS2G0oolHpo.

Continue to: Limited analysis

 

 

Limited analysis. Another problem that plagues many ML-based algorithms is that they have been tested on only a single data set. (Typically, a data set refers to a collection of clinical parameters from a patient population.) For example, researchers developing an algorithm might collect their data from a single health care system.

Several investigators have addressed this shortcoming by testing their software on 2 completely independent patient populations. Banda and colleagues15 recently developed a software platform to improve the detection rate in familial hypercholesterolemia, a significant cause of premature cardiovascular disease and death that affects approximately 1 of every 250 people. Despite the urgency of identifying the disorder and providing potentially lifesaving treatment, only 10% of patients receive an accurate diagnosis.16 Banda and colleagues developed a deep-learning algorithm that is far more effective than the traditional screening approach now in use.

To address the generalizability of the algorithm, it was tested on EHR data from 2 independent health care systems: Stanford Health Care and Geisinger Health System. In Stanford patients, the positive predictive value of the algorithm was 88%, with a sensitivity of 75%; it identified 84% of affected patients at the highest probability threshold. In Geisinger patients, the classifier generated a positive predictive value of 85%.

The future of these technologies

AI and ML are not panaceas that will revolutionize medicine in the near future. Likewise, the digital tools discussed in this article are not going to solve multiple complex medical problems addressed during a single office visit. But physicians who ignore mounting evidence that supports these emerging technologies will be left behind by more forward-thinking colleagues.

The best possible application of AI might save the health care sector $150 billion annually by 2026, according to an economic analysis.

A recent commentary in Gastroenterology17 sums up the situation best: “It is now too conservative to suggest that CADe [computer-assisted detection] and CADx [computer-assisted diagnosis] carry the potential to revolutionize colonoscopy. The artificial intelligence revolution has already begun.”

CORRESPONDENCE
Paul Cerrato, MA, cerrato@aol.com, pcerrato@optonline.net. John Halamka, MD, MS, john.halamka@bilh.org.

Computer technology and artificial intelligence (AI) have come a long way in several decades:

  • Between 1971 and 1996, access to the Medline database was primarily limited to university libraries and other institutions; in 1997, the database became universally available online as PubMed.1
  • In 2004, the President of the United States issued an executive order that launched a 10-year plan to put electronic health records (EHRs) in place nationwide; EHRs are now employed in nearly 9 of 10 (85.9%) medical offices.2

Over time, numerous online resources sprouted as well, including DxPlain, UpToDate, and Clinical Key, to name a few. These digital tools were impressive for their time, but many of them are now considered “old-school” AI-enabled clinical decision support.

In the past 2 to 3 years, innovative clinicians and technologists have pushed medicine into a new era that takes advantage of machine learning (ML)-enhanced diagnostic aids, software systems that predict disease progression, and advanced clinical pathways to help individualize treatment. Enthusiastic early adopters believe these resources are transforming patient care—although skeptics remain unconvinced, cautioning that they have yet to prove their worth in everyday clinical practice.

In this review, we first analyze the strengths and weaknesses of evidence supporting these tools, then propose a potential role for them in family medicine.

Machine learning takes on retinopathy

The term “artificial intelligence” has been with us for longer than a half century.3 In the broadest sense, AI refers to any computer system capable of automating a process usually performed manually by humans. But the latest innovations in AI take advantage of a subset of AI called “machine learning”: the ability of software systems to learn new functionality or insights on their own, without additional programming from human data engineers. Case in point: A software platform has been developed that is capable of diagnosing or screening for diabetic retinopathy without the involvement of an experienced ophthalmologist.

A software platform has been developed that is capable of diagnosing or screening for diabetic retinopathy without the involvement of an experienced ophthalmologist.

The landmark study that started clinicians and health care executives thinking seriously about the potential role of ML in medical practice was spearheaded by ­Varun Gulshan, PhD, at Google, and associates from several medical schools.4 Gulshan used an artificial neural network designed to mimic the functions of the human nervous system to analyze more than 128,000 retinal images, looking for evidence of diabetic retinopathy. (See “Deciphering artificial neural networks,” for an explanation of how such networks function.5) The algorithm they employed was compared with the diagnostic skills of several board-certified ophthalmologists.

[polldaddy:10453606]

Continue to: Deciperhing artificial neural networks

 

 

SIDEBAR
Deciphering artificial neural networks

The promise of health care information technology relies heavily on statistical methods and software constructs, including logistic regression, random forest modeling, clustering, and neural networks. The machine learning-enabled image analysis used to detect diabetic retinopathy and to differentiate a malignant melanoma and a normal mole is based on neural networking.

As we discussed in the body of this article, these networks mimic the nervous system, in that they comprise computer-generated “neurons,” or nodes, and are connected by “synapses” (FIGURE5). When a node in Layer 1 is excited by pixels coming from a scanned image, it sends on that excitement, represented by a numerical value, to a second set of nodes in Layer 2, which, in turns, sends signals to the next layer— and so on.

Eventually, the software’s interpretation of the pixels of the image reaches the output layer of the network, generating a negative or positive diagnosis. The initial process results in many interpretations, which are corrected by a backward analytic process called backpropagation. The video tutorials mentioned in the main text provide a more detailed explanation of neural networking.

How does a neural network operate?

 

Using an area-under-the-receiver operating curve (AUROC) as a metric, and choosing an operating point for high specificity, the algorithm generated sensitivity of 87% and 90.3% and specificity of 98.1% and 98.5% for 2 validation data sets for detecting referable retinopathy, as defined by a panel of at least 7 ophthalmologists. When AUROC was set for high sensitivity, the algorithm generated sensitivity of 97.5% and 96.1% and specificity of 93.4% and 93.9% for the 2 data sets.

These results are impressive, but the researchers used a retrospective approach in their analysis. A prospective analysis would provide stronger evidence.

That shortcoming was addressed by a pivotal clinical trial that convinced the US Food and Drug Administration (FDA) to approve the technology. Michael Abramoff, MD, PhD, at the University of Iowa Department of Ophthalmology and Visual Sciences and his associates6 conducted a prospective study that compared the gold standard for detecting retinopathy, the Fundus Photograph Reading Center (of the University of Wisconsin School of Medicine and Public Health), to an ML-based algorithm, the commercialized IDx-DR. The IDx-DR is a software system that is used in combination with a fundal camera to capture retinal images. The researchers found that “the AI system exceeded all pre-specified superiority endpoints at sensitivity of 87.2% ... [and] specificity of 90.7% ....”

Continue to: The FDA clearance statement...

 

 

The FDA clearance statement for this technology7 limits its use, emphasizing that it is intended only as a screening tool, not a stand-alone diagnostic system. Because ­IDx-DR is being used in primary care, the FDA states that patients who have a positive result should be referred to an eye care professional. The technology is contraindicated in patients who have a history of laser treatment, surgery, or injection in the eye or who have any of the following: persistent vision loss, blurred vision, floaters, previously diagnosed macular edema, severe nonproliferative retinopathy, proliferative retinopathy, radiation retinopathy, and retinal vein occlusion. It is also not intended for pregnant patients because their eye disease often progresses rapidly.

A large-scale validation study performed on data from Kaiser Permanente Northwest found that it is possible to estimate a person's risk of colorectal cancer by using age, gender, and complete blood count.

Additional caveats to keep in mind when evaluating this new technology include that, although the software can help detect retinopathy, it does not address other key issues for this patient population, including cataracts and glaucoma. The cost of the new technology also requires attention: Software must be used in conjunction with a specific retinal camera, the Topcon TRC-NW400, which is expensive (new, as much as $20,000).

Eye with artificial intelligence
IMAGE: ©GETTY IMAGES

Speaking of cost: Health care providers and insurers still question whether implementing AI-enabled systems is cost-­effective. It is too early to say definitively how AI and machine learning will have an impact on health care expenditures, because the most promising technological systems have yet to be fully implemented in hospitals and medical practices nationwide. Projections by Forbes suggest that private investment in health care AI will reach $6.6 billion by 2021; on a more confident note, an Accenture analysis predicts that the best possible application of AI might save the health care sector $150 billion annually by 2026.8

What role might this diabetic retinopathy technology play in family medicine? Physicians are constantly advising patients who have diabetes about the need to have a regular ophthalmic examination to check for early signs of retinopathy—advice that is often ignored. The American Academy of Ophthalmology points out that “6 out of 10 people with diabetes skip a sight-saving exam.”9 When a patient is screened with this type of device and found to be at high risk of eye disease, however, the advice to see an eye-care specialist might carry more weight.

Screening colonoscopy: Improving patient incentives

No responsible physician doubts the value of screening colonoscopy in patients 50 years and older, but many patients have yet to realize that the procedure just might save their life. Is there a way to incentivize resistant patients to have a colonoscopy performed? An ML-based software system that only requires access to a few readily available parameters might be the needed impetus for many patients.

Continue to: A large-scale validation...

 

 

A large-scale validation study performed on data from Kaiser Permanente Northwest found that it is possible to estimate a person’s risk of colorectal cancer by using age, gender, and complete blood count.10 This retrospective investigation analyzed more than 17,000 Kaiser Permanente patients, including 900 who already had colorectal cancer. The analysis generated a risk score for patients who did not have the malignancy to gauge their likelihood of developing it. The algorithms were more sensitive for detecting tumors of the cecum and ascending colon, and less sensitive for detection of tumors of the transverse and sigmoid colon and rectum.

To provide more definitive evidence to support the value of the software platform, a prospective study was subsequently conducted on more than 79,000 patients who had initially declined to undergo colorectal screening. The platform, called ColonFlag, was used to detect 688 patients at highest risk, who were then offered screening colonoscopy. In this subgroup, 254 agreed to the procedure; ColonFlag identified 19 malignancies (7.5%) among patients within the Maccabi Health System (Israel), and 15 more in patients outside that health system.11 (In the United States, the same program is known as LGI Flag and has been cleared by the FDA.)

Although ColonFlag has the potential to reduce the incidence of colorectal cancer, other evidence-based screening modalities are highlighted in US Preventive Services Task Force guidelines, including the guaiac-based fecal occult blood test and the fecal immunochemical test.12

 

Beyond screening to applications in managing disease

The complex etiology of sepsis makes the condition difficult to treat. That complexity has also led to disagreement on the best course of management. Using an ML algorithm called an “Artificial Intelligence Clinician,” Komorowski and associates13 extracted data from a large data set from 2 nonoverlapping intensive care unit databases collected from US adults.The researchers’ analysis suggested a list of 48 variables that likely influence sepsis outcomes, including:

  • demographics,
  • Elixhauser premorbid status,
  • vital signs,
  • clinical laboratory data,
  • intravenous fluids given, and
  • vasopressors administered.

Komorowski and co-workers concluded that “… mortality was lowest in patients for whom clinicians’ actual doses matched the AI decisions. Our model provides individualized and clinically interpretable treatment decisions for sepsis that could improve patient outcomes.”

A randomized clinical trial has found that an ML program that uses only 6 common clinical markers—blood pressure, heart rate, temperature, respiratory rate, peripheral capillary oxygen saturation (SpO2), and age—can improve clinical outcomes in patients with severe sepsis.14 The alerts generated by the algorithm were used to guide treatment. Average length of stay was 13 days in controls, compared with 10.3 days in those evaluated with the ML algorithm. The algorithm was also associated with a 12.4% drop in in-­hospital mortality.

Continue to: Addressing challenges, tapping resources

 

 

Addressing challenges, tapping resources

Advances in the management of diabetic retinopathy, colorectal cancer, and sepsis are the tip of the AI iceberg. There are now ML programs to distinguish melanoma from benign nevi; to improve insulin dosing for patients with type 1 diabetes; to predict which hospital patients are most likely to end up in the intensive care unit; and to mitigate the opioid epidemic.

An ML Web page on the JAMA Network (https://sites.jamanetwork.com/machine-learning/) features a long list of published research studies, reviews, and opinion papers suggesting that the future of medicine is closely tied to innovative developments in this area. This Web page also addresses the potential use of ML in detecting lymph node metastases in breast cancer, the need to temper AI with human intelligence, the role of AI in clinical decision support, and more.

The JAMA Network also discusses a few of the challenges that still need to be overcome in developing ML tools for clinical medicine—challenges that you will want to be cognizant of as you evaluate new research in the field.

Black-box dilemma. A challenge that technologists face as they introduce new programs that have the potential to improve diagnosis, treatment, and prognosis is a phenomenon called the “black-box dilemma,” which refers to the complex data science, advanced statistics, and mathematical equations that underpin ML algorithms. These complexities make it difficult to explain the mechanism of action upon which software is based, which, in turn, makes many clinicians skeptical about its worth.

A randomized clinical trial has found that an ML program that uses only 6 common clinical markers can improve clinical outcomes in patients with severe sepsis.

For example, the neural networks that are the backbone of the retinopathy algorithm discussed earlier might seem like voodoo science to those unfamiliar with the technology. It’s fortunate that several technology-savvy physicians have mastered these digital tools and have the teaching skills to explain them in plain-English tutorials. One such tutorial, “Understanding How Machine Learning Works,” is posted on the JAMA Network (https://sites.­jamanetwork.com/machine-learning/#multimedia). A more basic explanation was included in a recent Public Broadcasting System “Nova” episode, viewable at www.youtube.com/watch?v=xS2G0oolHpo.

Continue to: Limited analysis

 

 

Limited analysis. Another problem that plagues many ML-based algorithms is that they have been tested on only a single data set. (Typically, a data set refers to a collection of clinical parameters from a patient population.) For example, researchers developing an algorithm might collect their data from a single health care system.

Several investigators have addressed this shortcoming by testing their software on 2 completely independent patient populations. Banda and colleagues15 recently developed a software platform to improve the detection rate in familial hypercholesterolemia, a significant cause of premature cardiovascular disease and death that affects approximately 1 of every 250 people. Despite the urgency of identifying the disorder and providing potentially lifesaving treatment, only 10% of patients receive an accurate diagnosis.16 Banda and colleagues developed a deep-learning algorithm that is far more effective than the traditional screening approach now in use.

To address the generalizability of the algorithm, it was tested on EHR data from 2 independent health care systems: Stanford Health Care and Geisinger Health System. In Stanford patients, the positive predictive value of the algorithm was 88%, with a sensitivity of 75%; it identified 84% of affected patients at the highest probability threshold. In Geisinger patients, the classifier generated a positive predictive value of 85%.

The future of these technologies

AI and ML are not panaceas that will revolutionize medicine in the near future. Likewise, the digital tools discussed in this article are not going to solve multiple complex medical problems addressed during a single office visit. But physicians who ignore mounting evidence that supports these emerging technologies will be left behind by more forward-thinking colleagues.

The best possible application of AI might save the health care sector $150 billion annually by 2026, according to an economic analysis.

A recent commentary in Gastroenterology17 sums up the situation best: “It is now too conservative to suggest that CADe [computer-assisted detection] and CADx [computer-assisted diagnosis] carry the potential to revolutionize colonoscopy. The artificial intelligence revolution has already begun.”

CORRESPONDENCE
Paul Cerrato, MA, cerrato@aol.com, pcerrato@optonline.net. John Halamka, MD, MS, john.halamka@bilh.org.

References

1. Lindberg DA. Internet access to National Library of Medicine. Eff Clin Pract. 2000;3:256-260.

2. National Center for Health Statistics, Centers for Disease Control and Prevention. Electronic medical records/electronic health records (EMRs/EHRs). www.cdc.gov/nchs/fastats/electronic­-medical-records.htm. Updated March 31, 2017. Accessed October 1, 2019.

3. Smith C, McGuire B, Huang T, et al. The history of artificial intelligence. University of Washington. https://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf. Published December 2006. Accessed October 1, 2019.

4. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA; 2016;316:2402-2410.

5. Cerrato P, Halamka J. The Transformative Power of Mobile Medicine. Cambridge, MA: Academic Press; 2019.

6. Abràmoff MD, Lavin PT, Birch M, et al. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit Med. 2018;1:39.

7. US Food and Drug Administration. FDA permits marketing of artificial intelligence-based device to detect certain diabetes-related eye problems. Press release. www.fda.gov/news-events/press-announcements/fda-permits-marketing-artificial-intelligence-based-device-detect-certain-diabetes-related-eye. Published April 11, 2018. Accessed October 1, 2019.

8. AI and healthcare: a giant opportunity. Forbes Web site. www.forbes.com/sites/insights-intelai/2019/02/11/ai-and-healthcare-a-giant-opportunity/#5906c4014c68. Published February 11, 2019. Accessed October 25, 2019.

9. Boyd K. Six out of 10 people with diabetes skip a sight-saving exam. American Academy of Ophthalmology Web site. https://www.aao.org/eye-health/news/sixty-percent-skip-diabetic-eye-exams. Published November 1, 2016. Accessed October 25, 2019.

10. Hornbrook MC, Goshen R, Choman E, et al. Early colorectal cancer detected by machine learning model using gender, age, and complete blood count data. Dig Dis Sci. 2017;62:2719-2727.

11. Goshen R, Choman E, Ran A, et al. Computer-assisted flagging of individuals at high risk of colorectal cancer in a large health maintenance organization using the ColonFlag test. JCO Clin Cancer Inform. 2018;2:1-8.

12. US Preventive Services Task Force. Final recommendation statement: colorectal cancer: screening. www.uspreventiveservicestaskforce.org/Page/Document/RecommendationStatementFinal/colorectal-cancer-screening2#tab. Published May 2019. Accessed October 1, 2019.

13. Komorowski M, Celi LA, Badawi O, et al. The artificial intelligence clinician learns optimal treatment strategies for sepsis in intensive care. Nat Med. 2018;24:1716-1720.

14. Shimabukuro DW, Barton CW, Feldman MD, et al. Effect of a machine learning-based severe sepsis prediction algorithm on patient survival and hospital length of stay: a randomised clinical trial. BMJ Open Respir Res. 2017;4:e000234.

15. Banda J, Sarraju A, Abbasi F, et al. Finding missed cases of familial hypercholesterolemia in health systems using machine learning. NPJ Digit Med. 2019;2:23.

16. What is familial hypercholesterolemia? FH Foundation Web site. https://thefhfoundation.org/familial-hypercholesterolemia/what-is-familial-hypercholesterolemia. Accessed November 1, 2019.

17. Byrne MF, Shahidi N, Rex DK. Will computer-aided detection and diagnosis revolutionize colonoscopy? Gastroenterology. 2017;153:1460-1464.E1.


PRACTICE RECOMMENDATIONS

› Encourage patients with diabetes who are unwilling to have a regular eye exam to have an artificial intelligence-based retinal scan that can detect retinopathy. B

› Consider using a machine learning-based algorithm to help evaluate the risk of colorectal cancer in patients who are resistant to screening colonoscopy. B

› Question the effectiveness of any artificial intelligence-based software algorithm that has not been validated by at least 2 independent data sets derived from clinical parameters. B

Strength of recommendation (SOR)

A Good-quality patient-oriented evidence
B Inconsistent or limited-quality patient-oriented evidence
C Consensus, usual practice, opinion, disease-oriented evidence, case series


Sleep vs. Netflix, and grape juice BPAP


Sleep vs. Netflix: the eternal struggle

Ladies and gentlemen, welcome to Livin’ on the MDedge World Championship Boxing! Tonight, we bring you a classic match-up in the endless battle for your valuable time.

A man sleeps in upright position on his couch.
jackscoldsweat/Getty Images

In the red corner, weighing in at a muscular 8 hours, is the defending champion: a good night’s sleep! And now for the challenger in the blue corner, coming in at a strong “just one more episode, I promise,” it’s binge watching!

Oh, sleep opens the match strong: According to a survey from the American Academy of Sleep Medicine, U.S. adults rank sleep as their second-most important priority, with only family beating it out. My goodness, that is a strong opening offensive.

But wait, binge watching is countering! According to the very same survey, 88% of Americans have admitted that they’d lost sleep because they’d stayed up late to watch extra episodes of a TV show or streaming series, a rate that rises to 95% in people aged 18-44 years. Oh dear, sleep looks like it’s in trouble.

Hang on, what’s binge watching doing? It’s unleashing a quick barrage of attacks: 72% of men aged 18-34 reported delaying sleep for video games, two-thirds of U.S. adults reported losing sleep to read a book, and nearly 60% of adults delayed sleep to watch sports. We feel slightly conflicted about our metaphor choice now.

And with a final haymaker from “guess I’ll watch ‘The Office’ for a sixth time,” binge watching has defeated the defending champion! Be sure to tune in next week, when alcohol takes on common sense. A true fight for the ages there.
 

Lead us not into temptation

Can anyone resist the temptation of binge watching? Can no one swim against the sleep-depriving, show-streaming current? Is resistance to an “Orange Is the New Black” bender futile?

A chocolate doughnut with sprinkles
spaxiax/Getty Images

University of Wyoming researchers say there’s hope. Those who would sleep svelte and sound in a world of streaming services and Krispy Kreme must plan ahead to tame temptation.

Proactive temptation management begins long before those chocolate iced glazed with sprinkles appear at the nurses’ station. Planning your response ahead of time increases the odds that the first episode of “Stranger Things” is also the evening’s last episode.

Using psychology’s human lab mice – undergraduate students – the researchers tested five temptation-proofing self-control strategies.

The first strategy: situation selection. If “Game of Thrones” is on in the den, avoid the room as if it were an unmucked House Lannister horse stall. Second: situation modification. Is your spouse hotboxing GoT on an iPad next to you in the bed? Politely suggest that GoT is even better when viewed on the living room sofa.

The third strategy: distraction. Enjoy the wholesome snap of a Finn Crisp while your coworkers destroy those Krispy Kremes like Daenerys leveling King’s Landing. Fourth: reappraisal. Tell yourself that season 2 of “Ozark” can’t surpass season 1, and will simply swindle you of your precious time. And fifth, the Nancy-Reagan, temptation-resistance classic: response inhibition. When offered the narcotic that is “Breaking Bad,” just say no!

Which temptation strategies worked best?

Planning ahead with one through four led fewer Cowboy State undergrads into temptation.

As for responding in the moment? Well, the Krispy Kremes would’ve never lasted past season 2 of “The Great British Baking Show.”
 

 

 

Stuck between a tongue and a hard place

There once was a 7-year-old boy who loved grape juice. He loved grape juice so much that he didn’t want to waste any after drinking a bottle of the stuff.

Three bottles of juice
Shablon/Getty Images

To get every last drop, he tried to use his tongue to lick the inside of a grape juice bottle. One particular bottle, however, was evil and had other plans. It grabbed his tongue and wouldn’t let go, even after his mother tried to help him.

She took him to the great healing wizards at Auf der Bult Children’s Hospital in Hannover, Germany – which is quite surprising, because they live in New Jersey. [Just kidding, they’re from Hannover – just checking to see if you’re paying attention.]

When their magic wands didn’t work, doctors at the hospital mildly sedated the boy with midazolam and esketamine and then advanced a 70-mm plastic button cannula between the neck of the bottle and his tongue, hoping to release the presumed vacuum. No such luck.

It was at that point that the greatest of all the wizards, Dr. Christoph Eich, a pediatric anesthesiologist at the hospital, remembered having a similar problem with a particularly villainous bottle of “grape juice” during his magical training days some 20 years earlier.

The solution then, he discovered, was to connect the cannula to a syringe and inject air into the bottle to produce positive pressure and force out the foreign object.

Dr. Eich’s reinvention of BPAP (bottle positive airway pressure) worked on the child, who, once the purple discoloration of his tongue faded after 3 days, was none the worse for wear and lived happily ever after.

We’re just wondering if the good doctor told the child’s mother that the original situation involved a bottle of wine that couldn’t be opened because no one had a corkscrew. Well, maybe she reads the European Journal of Anaesthesiology.


Melanoma incidence continues to increase, yet mortality stabilizing


The incidence of melanoma in the United States continues to increase, yet mortality from the disease has been stable and may even be starting to decline, according to data from the National Cancer Institute’s Surveillance, Epidemiology, and End Results (SEER) program.

Director of clinical trials at the University of Pittsburgh Medical Center’s Department of Dermatology.
Dr. Laura Ferris

At the Skin Disease Education Foundation’s annual Las Vegas Dermatology Seminar, Laura Korb Ferris, MD, PhD, said that SEER data project 96,480 new cases of melanoma in 2019, as well as 7,230 deaths from the disease. In 2016, SEER projected 10,130 deaths from melanoma, “so we’re actually projecting a reduction in melanoma deaths,” said Dr. Ferris, director of clinical trials at the University of Pittsburgh Medical Center’s department of dermatology. She added that the death rate from melanoma in 2016 was 2.17 per 100,000 population, a reduction from 2.69 per 100,000 population in 2011, “so it looks like melanoma mortality may be stable,” or even reduced, despite an increase in melanoma incidence.

A study of SEER data between 1989 and 2009 found that melanoma incidence is increasing across all lesion thicknesses (J Natl Cancer Inst. 2015 Nov 12. doi: 10.1093/jnci/djv294). Specifically, the incidence increased most among thin lesions, but there was a smaller increased incidence of thick melanoma. “This suggests that the overall burden of disease is truly increasing, but it is primarily stemming from an increase in T1/T2 disease,” Dr. Ferris said. “This could be due in part to increased early detection.”

Improvements in melanoma-specific survival, she continued, are likely a combination of improved management of T4 disease, a shift toward detection of thinner T1/T2 melanoma, and increased detection of T1/T2 disease.

The SEER data also showed that the incidence of fatal cases of melanoma has decreased since 1989, but only in thick melanomas. This trend may indicate a modest improvement in the management of T4 tumors. “Optimistically, I think increased detection efforts are improving survival by early detection of thin but ultimately fatal melanomas,” Dr. Ferris said. “Hopefully we are finding disease earlier and we are preventing patients from progressing to these fatal T4 melanomas.”

Disparities in melanoma-specific survival also come into play. Men have poorer survival than women; whites have the highest survival, and non-Hispanic whites have better survival than Hispanic whites, Dr. Ferris said. Lower survival rates are seen in blacks and nonblack minorities, as well as among those in high poverty and those who are separated or nonmarried. Lesion type also matters: the highest survival is seen in patients with superficial spreading melanoma, while lower survival is observed in those with nodular melanoma and acral lentiginous melanoma.

 

 


Early detection of thin nodular melanomas has the potential to significantly impact melanoma mortality, “but we want to keep in mind that the majority of ultimately fatal melanomas are superficial spreading melanomas,” Dr. Ferris said. “That is because they are so much more prevalent. As a dermatologist, I think a lot about screening and early detection. Periodic screening is a good strategy for a slower-growing superficial spreading melanoma, but it’s not necessarily a good strategy for a rapidly growing nodular melanoma. That’s going to require better education and better access to health care.”



Self-detection of melanoma is another strategy to consider. According to Dr. Ferris, results from multiple studies suggest that about 50% of all melanomas are detected by patients, but the ones they find tend to be thicker than the ones that clinicians detect during office visits. “It would be great if we can get that number higher than 50%,” Dr. Ferris said. “If patients really understood what melanoma is, what it looks like, and when they needed to seek medical attention, perhaps we could get that over 50% and see self-detection of thinner melanomas. That’s a very low-cost intervention.”

Targeted screening that stratifies by risk factors and by age “makes screening more efficient and more cost-effective,” she added. She cited one analysis, which found that clinicians need to screen 606 people and conduct 25 biopsies to find a single melanoma. “That’s very resource intensive,” she said. “However, if you only screened people 50 or older or 65 or older, the number needed to screen goes down, and because your pretest probability is higher, your number needed to biopsy goes down as well. If you factor in things like a history of atypical nevi or a personal history of melanoma, those patients are at a higher risk of developing melanoma.”
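For readers who want to see the arithmetic behind those figures, here is a toy calculation of screening yield. The 606-screened/25-biopsies numbers come from the analysis Dr. Ferris cited; the age-restricted scenario is purely hypothetical and assumes, for illustration only, that restricting screening doubles the pretest probability.

```python
# Toy calculation of screening burden per melanoma found. The general-population
# figures (606 people screened, 25 biopsies per melanoma) come from the analysis
# cited above; the age-restricted scenario is hypothetical and is included only
# to show how a higher pretest probability lowers both numbers.

def screening_burden(melanomas_found, people_screened, biopsies_done):
    """Return (number needed to screen, number needed to biopsy) per melanoma detected."""
    return people_screened / melanomas_found, biopsies_done / melanomas_found

# General-population screening, as reported
print(screening_burden(1, 606, 25))   # -> (606.0, 25.0)

# Hypothetical age-restricted cohort: the same screening effort, but twice the
# pretest probability, so roughly twice as many melanomas are found
print(screening_burden(2, 606, 25))   # -> (303.0, 12.5)
```

In this hypothetical example, concentrating the same screening effort on a higher-risk group roughly halves both the number needed to screen and the number needed to biopsy.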

Dr. Ferris closed her presentation by noting that Australia leads other countries in melanoma prevention efforts. There, the combined incidence of skin cancer is higher than the incidence of any other type of cancer. Four decades ago, Australian health officials launched SunSmart, a series of initiatives intended to reduce skin cancer. These include implementation of policies for hat wearing and shade provision in schools and at work, availability of more effective sunscreens, inclusion of sun protection items as a tax-deductible expense for outdoor workers, increased availability since the 1980s of long-sleeved sun protective swimwear, a ban on the use of indoor tanning since 2014, provision of UV forecasts in weather reports, and a comprehensive program of grants for community shade structures (PLoS Med. 2019 Oct 8;16[10]:e1002932).

“One approach to melanoma prevention won’t fit all,” she concluded. “We need to focus on prevention, public education to improve knowledge and self-detection.”

Dr. Ferris disclosed that she is a consultant to and an investigator for DermTech and Scibase. She is also an investigator for Castle Biosciences.

SDEF and this news organization are owned by the same parent company. Dr. Ferris spoke during a forum on cutaneous malignancies at the meeting.



New models predict post-op pain in TKA


 

Researchers have developed models that successfully predict persistent postoperative pain (PPP) after total knee arthroplasty (TKA) two-thirds of the time. Major risk factors include preoperative pain, sensory testing results, anxiety, and anticipated pain.

“The results of this study provide some basis for the identification of patients at risk of PPP after TKA and highlight several modifiable factors that may be targeted by clinicians in an attempt to reduce the risk of developing PPP,” write the authors of the study, which appeared in the British Journal of Anaesthesia.

The authors, led by David Rice, PhD, of Auckland University of Technology, note that moderate to severe levels of PPP affect an estimated 10%-34% of patients at least 3 months after TKA surgery. “PPP adversely affects quality of life, is the most important predictor of patient dissatisfaction after TKA, and is a common reason for undergoing revision surgery.”

The researchers, who launched the study to gain insight into the risk factors that can predict PPP, recruited 300 New Zealand volunteers (average age = 69, 48% female, 92% white, average body mass index [BMI] = 31 kg/m2) to be surveyed before and after TKA surgery. They monitored pain and tracked a long list of possible risk factors including psychological traits (such as anxiety, pain catastrophizing and depression), physical traits (such as gender, BMI), and surgical traits (such as total surgery time).

At 6 months, 21% of 291 patients reported moderate to severe pain, and the percentage fell to 16% in 288 patients at 12 months.

The researchers developed two models that successfully predicted moderate-to-severe PPP.

The 6-month model relied on higher levels of preoperative pain intensity, temporal summation of pain (a quantitative sensory testing measure in which repeated stimulation produces escalating pain perception), trait anxiety (a measure of an individual’s baseline level of anxiety), and expected pain. It correctly predicted moderate to severe PPP 66% of the time (area under the curve [AUC] = 0.70; sensitivity = 0.72; specificity = 0.64).

The 12-month model relied on higher levels of all the risk factors except for temporal summation and correctly predicted moderate-to-severe PPP 66% of the time (AUC = 0.66, sensitivity = 0.61, specificity = 0.67).
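As a rough illustration of how discrimination statistics like these are produced, the minimal sketch below fits a logistic regression to simulated data and reports AUC, sensitivity, and specificity at an arbitrary probability threshold. It is not the authors’ model: the predictor names merely echo the risk factors described above, every data value is fabricated for the example, and the scikit-learn and NumPy libraries are assumed to be available.

```python
# Illustration only: a logistic model on simulated data, evaluated with the same
# metrics reported in the study (AUC, sensitivity, specificity). The predictor
# names mirror the risk factors described above, but every value here is fake.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.normal(5, 2, n),    # preoperative pain intensity (0-10 scale, simulated)
    rng.normal(40, 10, n),  # trait anxiety score (simulated)
    rng.normal(5, 2, n),    # expected postoperative pain (simulated)
])
# Simulated outcome: roughly 1 in 5 patients develops persistent pain, with the
# risk rising as each predictor increases
logit = -8 + 0.4 * X[:, 0] + 0.08 * X[:, 1] + 0.3 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
prob = model.predict_proba(X_test)[:, 1]

auc = roc_auc_score(y_test, prob)
pred = (prob >= 0.5).astype(int)  # threshold chosen arbitrarily for the example
tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
print(f"AUC = {auc:.2f}, sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```

With a simple cutoff like this, sensitivity and specificity trade off against each other; the operating point a study reports reflects the threshold its authors chose, not a property of the AUC itself.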

The researchers noted that other research has linked trait anxiety and expected pain to PPP. In regard to anxiety, “cognitive behavioral interventions in the perioperative period aimed at reducing the threat value of surgery and of postoperative pain, improving patients’ coping strategies, and enhancing self-efficacy might help to reduce the risk of PPP after TKA,” the researchers write. “Furthermore, there is some evidence that anxiolytic medications can diminish perioperative anxiety and reduce APOP [acute postoperative pain] although its effects on PPP are unclear.”

Moving forward, the authors write, “strategies to minimize intraoperative nerve injury, reduce preoperative pain intensity, and address preoperative psychological factors such as expected pain and anxiety may lead to improved outcomes after TKA and should be explored.”

The Australia New Zealand College of Anesthetists and Auckland University of Technology funded the study. The study authors report no relevant disclosures.

SOURCE: Rice D et al. Br J Anaesth 2018;804-12. doi: https://doi.org/10.1016/j.bja.2018.05.070.


Poll: Clostridium difficile


Choose your answer in the poll below. To check the accuracy of your answer, see PURLs: Do Probiotics Reduce C diff Risk in Hospitalized Patients?

[polldaddy:10452484]

 


 

 

The correct answer is a.) 1 to 2

To learn more, see this month's PURLs: Do Probiotics Reduce C diff Risk in Hospitalized Patients?


The Dog Can Stay, but the Rash Must Go


A 50-year-old man presents with a 1-year history of an itchy, bumpy rash on his chest. He denies any history of similar rash and says there have been no “extraordinary changes” in his life that could have triggered this manifestation. Despite consulting various primary care providers, he has been unable to acquire either a definitive diagnosis or effective treatment.

The patient works exclusively in a climate-controlled office. Although there had been no changes to laundry detergent, body soap, deodorant, or other products that might have triggered the rash, he tried alternative products to see what effect they might have. Nothing beneficial came of these experiments. Similarly, the family dogs were temporarily “banished,” with no improvement in his condition.

From the outset, the rash and the associated itching have been confined to the patient’s chest. No one else in his family is similarly affected.

The patient is otherwise quite well. He takes no prescription medications and denies any recent foreign travel.

Itchy, bumpy rash on chest

EXAMINATION
The papulovesicular rash is strikingly uniform. The patient’s entire chest is covered with tiny vesicles, many with clear fluid inside. The lesions average 1.2 to 2 mm in width, and nearly all are quite palpable. Each lesion is slightly erythematous but neither warm nor tender on palpation.

Examination of the rest of the patient’s exposed skin reveals no similar lesions. His back, hands, and genitals are notably free of any such lesions.

A shave biopsy is performed using a saucerization technique, and the specimen is submitted to pathology for routine processing. The report confirms the papulovesicular nature of the lesions and, more significantly, shows consistent acantholysis (loss of intercellular connections between keratinocytes), along with focal lymphohistiocytic infiltrates.

What’s the diagnosis?

 

 

DISCUSSION
This is a classic presentation of Grover disease, also known as transient acantholytic dermatosis (AD). While not rare, it is seen only occasionally in dermatology practices. When it does walk through the door, it is twice as likely to be seen in a male as in a female patient and is less commonly seen in those with darker skin.

AD is easy enough to diagnose clinically, without biopsy, particularly in classic cases such as this one. The distribution and morphology of the rash, as well as the gender and age of the patient, are all typical of this idiopathic condition. The biopsy results, besides being consistent with AD, did serve to rule out other items in the differential (eg, bacterial folliculitis, pemphigus, and acne).

Since AD was first described in 1974 by R.W. Grover, MD, much research has been conducted to flesh out the nature of the disease, its potential causes, and possible treatment. One certainty about so-called transient AD is that most cases are far from transient—in fact, they can last for a year or more. Attempts have been made to connect AD with internal disease (eg, occult malignancy) or even mercury exposure, but these theories have not been corroborated.

Consistent treatment success has also been elusive. Most patients achieve decent relief with the use of topical steroid creams, with or without the addition of anti-inflammatory medications (eg, doxycycline). Other options include isotretinoin and psoralen plus ultraviolet A (PUVA) photochemotherapy. Fortunately, most cases eventually clear up.

TAKE-HOME LEARNING POINTS

  • Grover disease, also known as transient acantholytic dermatosis (AD), usually manifests with an acute eruption of papulovesicular lesions.
  • AD lesions tend to be confined to the chest and are typically pruritic.
  • Clinical diagnosis is usually adequate, although biopsy, which will reveal typical findings of acantholysis, may be necessary to rule out other items in the differential.
  • Treatment with topical steroids, oral doxycycline, and “tincture of time” usually suffices, but resolution may take a year or more.

Red patches and thin plaques on feet

The FP conducted a physical exam and noticed bilateral dorsal foot dermatitis with occasional small vesicles and lichenified papules, which was suggestive of chronic contact or irritant dermatitis. The patient’s favorite pair of boots offered another clue as to the most likely contact allergens. (The boots were leather, and leather is treated with tanning agents and dyes.) A biopsy was not performed but would be expected to show spongiosis with some degree of lichenification (thickening of the epidermis)—a sign of the acute-on-chronic nature of this process. The diagnosis of irritant or allergic contact dermatitis was made empirically.

The differential diagnosis for rashes on the feet can be broad and includes common tinea pedis, pitted keratolysis, stasis dermatitis, psoriasis, eczemas of various types, keratoderma, and contact dermatitis.

Many patients mistakenly believe that materials they use every day cannot become allergens. In counseling patients about this, point out that contact allergies often arise from repeated exposure. For example, dentists often develop dental amalgam allergies, hair professionals develop hair dye allergies, and machinists commonly develop cutting oil allergies. These reactions can and do occur years into use.

The patient was started on topical clobetasol 0.05% ointment bid for 3 weeks, which provided quick relief and cleared his feet of the patches and plaques. He continued to wear his boots until contact allergy patch testing was performed in the office over a series of 3 days. This revealed an allergy to chromium, a common leather tanning agent. The patient was advised to avoid leather products including jackets, car upholstery, and gloves. After he carefully chose different footwear without a leather insole or tongue, the patient required no further therapy and remained clear.

Photos and text for Photo Rounds Friday courtesy of Jonathan Karnes, MD (copyright retained).

Though metastatic breast cancer survival is improving, rates vary by region

Though survival rates of patients with metastatic breast cancer (MBC) have increased over the last 2 decades, a new study has indicated disparities exist across regions and by variables like age and race.

“It appears from these results that we may be at a crossroads for MBC treatment and survival,” wrote Judith A. Malmgren, PhD, of the University of Washington and her coauthors. The study was published in Cancer. “Access to appropriate, timely, and up‐to‐date diagnosis, care, treatment, and surveillance could turn this fatal disease into a chronic and treatable phenomenon, depending on patient factors, molecular subtype, and insurance capacity to pay for treatment,” they said.

To determine how breast cancer outcomes vary across regions, the researchers compared breast cancer–specific survival (BCSS) in three groups: patients in the Surveillance, Epidemiology, and End Results-9 (SEER-9) registry excluding the Seattle-Puget Sound (S-PS) region (n = 12,121), patients from the S-PS region (n = 1,931), and an individual cohort in that area (n = 261). Five-year BCSS rates were calculated for three time periods: 1990-1998, 1999-2004, and 2005-2011.

All analyzed patients were diagnosed with a first primary, de novo, stage IV breast cancer between the ages of 25 and 84 years from 1990 to 2011. Patients in the SEER-9 group and the S-PS region had a mean age of 61 years, compared with the individual cohort’s mean age of 55 years. Patients in the individual cohort were more likely to reside in a major metropolitan area of over 1 million people, compared with the SEER group and the S-PS region (86% versus 61% and 58%, respectively).

Patients in the SEER-9 group had improved BCSS rates over the study period, from 19% in 1990-1998 (95% confidence interval, 18%-21%; P less than .001) to 26% in 2005-2011 (95% CI, 24%-27%; P less than .001). Patients in the S-PS region saw even greater improvements in BCSS rates, from 21% in 1990-1998 (95% CI, 18%-24%; P less than .001) to 35% in 2005-2011 (95% CI, 32%-39%; P less than .001). But the largest improvement in survival rates came from patients in the individual cohort, who went from 29% in 1990-1998 (95% CI, 18%-37%; P less than .001) to 56% in 2005-2011 (95% CI, 45%-65%; P = .004).

In a proportional hazards model for breast cancer–specific death, reduced hazard in the SEER-9 group was associated with surgery (hazard ratio, 0.58; 95% CI, 0.55-0.61; P less than .001), an age less than 70 (HR, 0.77; 95% CI, 0.73-0.82; P less than .001) and white race (HR, 0.84; 95% CI, 0.79-0.89; P less than .001). Similar associations were seen in the S-PS region with surgery (HR, 0.57; 95% CI, 0.50-0.66; P less than .001) and an age less than 70 (HR, 0.72; 95% CI, 0.62-0.84; P less than .001), but not white race.
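
To make these hazard ratios more concrete, here is a minimal worked sketch, not drawn from the study's analysis, of how a reported HR and its 95% CI imply a log-scale standard error and Wald z statistic, assuming the interval was built in the usual way as exp[log(HR) ± 1.96 × SE]. The helper function and its name are hypothetical; the input numbers are the surgery estimate quoted above.

```python
import math

def wald_from_hr(hr, lcl, ucl, z_crit=1.96):
    """Back out the log-scale standard error and Wald z statistic implied by a
    reported hazard ratio and its 95% CI, assuming the interval was built as
    exp(log(HR) +/- 1.96 * SE). Hypothetical helper, for illustration only."""
    log_hr = math.log(hr)
    se = (math.log(ucl) - math.log(lcl)) / (2 * z_crit)
    return se, log_hr / se

# Surgery in the SEER-9 model, as reported above: HR 0.58 (95% CI, 0.55-0.61)
se, z = wald_from_hr(0.58, 0.55, 0.61)
print(f"implied SE of log(HR) = {se:.3f}, Wald z = {z:.1f}")
```

Running this gives an implied |z| of roughly 20, in line with the P less than .001 reported for surgery.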

The study results “indicate that the stage IV population that is living longer may be benefiting from many of the same therapies used to treat early breast cancer, especially for patients who are able to handle adjuvant chemotherapy treatment and are HR‐positive,” the researchers said. “However, the lag in survival improvement across different population‐based, geographic regions suggests that some groups and regions may benefit unequally from treatment advances as well as timely diagnosis.”

The study was funded by the Kaplan Cancer Research Fund, the Metastatic Breast Cancer Alliance, and the Surveillance, Epidemiology, and End Results Cancer Surveillance System program of the National Cancer Institute. The authors reported no conflicts of interest.

SOURCE: Malmgren JA et al. Cancer. 2019 Oct 22. doi: 10.1002/cncr.32531.

Nivolumab benefit for NSCLC persists at 5-year follow-up

Nivolumab, compared with docetaxel chemotherapy, led to a fivefold improvement in 5-year overall survival among previously treated patients with non–small cell lung cancer (NSCLC), according to a pooled analysis of data from the phase 3 CheckMate 017 and 057 trials.

The 5-year overall survival (OS) rates from the two randomized registrational trials, which established the programmed death-1 (PD-1) inhibitor nivolumab as the standard salvage therapy for NSCLC, were 13.4% vs. 2.6% (median, 11.1 vs. 8.1 months) with nivolumab and docetaxel, respectively, Scott Gettinger, MD, reported at the World Conference on Lung Cancer.

“These are the first randomized trials to report 5-year outcomes for a PD-1 axis inhibitor in patients with previously treated advanced non–small cell lung cancer,” said Dr. Gettinger, a professor at the Yale Comprehensive Cancer Center, New Haven, Conn. “This is really unprecedented; we wouldn’t expect many patients to be out 5 years in this scenario.”

Notably, the 5-year OS benefit was seen in both trials, he said, explaining that each compared nivolumab and docetaxel, but CheckMate 017 included patients with only squamous NSCLC, and CheckMate 057 included only non–squamous NSCLC patients.

The trials randomized 272 and 582 patients, respectively, and both demonstrated significantly improved 12-month OS with nivolumab – regardless of programmed death-ligand 1 (PD-L1) expression levels. Common eligibility criteria included stage IIIb/IV disease, good performance status (ECOG performance score of 0-1), and 1 prior platinum-based chemotherapy; CheckMate 057 further allowed prior tyrosine kinase inhibitor treatment for known anaplastic lymphoma kinase (ALK) translocation or epidermal growth factor receptor (EGFR) mutation, and allowed prior maintenance therapy. Doses in both trials were 3 mg/kg of nivolumab every 2 weeks or 75 mg/m2 of intravenous docetaxel every 3 weeks until disease progression or unacceptable toxicity.

The pooled data also showed an improvement in 5-year progression-free survival (PFS) with nivolumab vs. docetaxel (8% vs. 0%).

“Again, we don’t see this in trials – more commonly we see zero patients without progression, and that’s what we saw with the docetaxel arm,” said Dr. Gettinger, who also is the Disease Aligned Research Team Leader, Thoracic Oncology Program, at the cancer center.



The median duration of responses with nivolumab was 19.9 months vs. 5.6 months with docetaxel, and 32.2% of nivolumab responders were still without progression at 5 years, he noted.

A common question in the clinic concerns the prognosis of patients who do well on PD-1 axis inhibitors, which prompted an additional analysis across the two trials, he said. Among nivolumab-treated patients who had not progressed at 2, 3, or 4 years, 60%, 78%, and 88%, respectively, also had not progressed at 5 years, and 80%, 93%, and 100% of patients in those groups were alive at 5 years. In the docetaxel arm, only 4, 1, and 0 patients remained progression free at 2, 3, and 4 years, respectively, and none of them survived to 5 years, he said.
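
The figures in the paragraph above are conditional probabilities: among patients still progression free at a landmark time point, the proportion still progression free at 5 years. The sketch below shows only that arithmetic, using the standard relation S(5 | t) = S(5) / S(t); the progression-free survival values are hypothetical placeholders, not CheckMate data, and the function name is invented for illustration.

```python
def conditional_survival(surv, landmark, horizon):
    """Probability of remaining progression free at `horizon`, given the
    patient was progression free at `landmark`: S(horizon) / S(landmark)."""
    return surv[horizon] / surv[landmark]

# Hypothetical progression-free survival estimates (year -> proportion),
# chosen only to illustrate the arithmetic; these are NOT CheckMate data.
pfs = {2: 0.15, 3: 0.11, 4: 0.09, 5: 0.08}

for landmark in (2, 3, 4):
    p = conditional_survival(pfs, landmark, 5)
    print(f"PFS at 5 years given no progression at {landmark} years: {p:.0%}")
```

The trial's own landmark figures come from its observed Kaplan-Meier estimates; the point here is simply how conditioning on an earlier landmark is calculated.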

No new safety signals were seen with long-term follow-up, he added.

“In fact there was only one grade 3 or higher toxicity that was related to treatment in the nivolumab arm, and this was a grade 3 lipase elevation. There was one patient who discontinued nivolumab after 3 years, and this was for a grade 2 rash and eczema that had waxed and waned since starting nivolumab,” he said.

Also of note, 10% of nivolumab-treated patients who were off treatment at 5 years – for variable periods of time – had not progressed and had not received subsequent therapy.

“So we clearly see benefit in our patients long after they finish a course or stop for some reason,” he said.

CheckMate 017 and 057 were funded by Bristol-Myers Squibb. Dr. Gettinger reported advisory board and/or consulting work for, and/or research funding from Bristol-Myers Squibb, Nektar Therapeutics, Genentech/Roche, Iovance, and Takeda/Ariad.

SOURCE: Gettinger S et al. WCLC 2019, Abstract PR04.03.

Case-control study IDs several novel risk factors of post-HCT melanoma

Certain myeloablative conditioning regimens are among several novel risk factors for melanoma after allogeneic hematopoietic stem cell transplantation (HCT), according to findings from a nested case-control study.

The study included 140 cases of melanoma and 557 controls matched by age at HCT, sex, primary disease, and survival time. The results showed a significantly increased melanoma risk in HCT survivors who received total body irradiation–based myeloablative conditioning, reduced-intensity conditioning with melphalan, or reduced-intensity conditioning with fludarabine, compared with those who received busulfan-based myeloablative conditioning (odds ratios, 1.77, 2.60, and 2.72, respectively), Megan M. Herr, PhD, of the division of cancer epidemiology and genetics at the National Cancer Institute, and the Roswell Park Comprehensive Cancer Center, Buffalo, N.Y., and colleagues reported in the Journal of the American Academy of Dermatology.

Melanoma risk also was increased in patients who experienced acute graft-versus-host disease (GVHD) with stage 2 or greater skin involvement (OR, 1.92 vs. those with no acute GVHD), chronic GVHD without skin involvement (OR, 1.91 vs. those with no chronic GVHD), or keratinocytic carcinoma (OR, 2.37), and in those who resided in areas with higher ambient ultraviolet radiation (OR for the highest vs. lowest tertile, 1.64).
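
For readers less familiar with case-control measures, the sketch below shows how an unadjusted odds ratio and its Wald 95% CI would be computed from a simple 2×2 exposure table. It is purely conceptual: the counts are an invented split of the study's 140 cases and 557 controls for a hypothetical exposure, and the published estimates above come from matched, adjusted models rather than this crude calculation.

```python
import math

def odds_ratio_ci(a, b, c, d, z_crit=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lcl = math.exp(math.log(or_) - z_crit * se_log_or)
    ucl = math.exp(math.log(or_) + z_crit * se_log_or)
    return or_, lcl, ucl

# Invented split of the study's 140 cases and 557 controls by a hypothetical
# exposure, used only to show the mechanics of the calculation.
or_, lcl, ucl = odds_ratio_ci(40, 100, 120, 437)
print(f"OR = {or_:.2f} (95% CI, {lcl:.2f}-{ucl:.2f})")
```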

The UV radiation finding was more pronounced for melanomas occurring 6 or more years after transplant (OR, 3.04 for highest vs. lowest tertile), whereas ambient UV radiation was not associated with melanomas occurring earlier (ORs, 1.37 for less than 3 years and 0.98 at 3-6 years), the investigators noted.

The findings, based on large-scale and detailed clinical data from the Center for International Blood and Marrow Transplant Research for HCT performed during 1985-2012, show that melanoma after HCT has a multifactorial etiology that includes patient-, transplant-, and posttransplant-related factors, they said, noting that the findings also underscore the importance of “prioritization of high-risk survivors for adherence to prevention and screening recommendations.”

Those recommendations call for routine skin examination and photoprotective precautions – particularly in HCT survivors at the highest risk – but studies of screening behaviors suggest that fewer than two-thirds of HCT survivors adhere to them, the authors said. They concluded that further research on the cost-effectiveness of melanoma screening is warranted, as is investigation into whether current approaches are associated with melanoma risk.

This work was supported by the intramural research program of the National Cancer Institute, the National Institutes of Health, and the Department of Health & Human Services. The authors reported having no conflicts of interest.

SOURCE: Herr MM et al. J Am Acad Dermatol. 2019 Oct 22. doi: 10.1016/j.jaad.2019.10.034.




 
