
Researchers may use artificial intelligence (AI) language models such as ChatGPT to write and revise scientific manuscripts, according to a new announcement from the International Committee of Medical Journal Editors (ICMJE). These tools should not be listed as authors, and researchers must disclose how AI-assisted technologies were used, the committee said.

The new guidelines are the latest effort by medical journals to define policies for the use of these large language models (LLMs) in scientific publishing. While AI-assisted tools can help with tasks such as writing, analyzing data, and catching mistakes, they are also prone to errors, noted Casey Greene, PhD, a professor of biomedical informatics at the University of Colorado School of Medicine, Aurora. It is also not entirely clear how these tools store and process the information entered into them, or who has access to that information, he noted.

At the same time, experts argue that these AI tools could have a positive impact on the field by reducing linguistic disparities in scientific publishing and by alleviating the burden of some of the monotonous, mechanical tasks that come with manuscript writing.

What experts can agree on, though, is that the use of AI tools is here to stay. “This is going to become a common tool,” Dr. Greene said. “I don’t think there’s a way out of that at this point.”
 

A change in medical publishing

OpenAI released ChatGPT in November 2022. In its own words, ChatGPT is “a deep learning model that has been trained on a massive amount of text data to understand and generate humanlike text.” Enter a question or a prompt, and it will respond. For example, when asked how the AI tool can be used in scientific publishing, ChatGPT responded:

“ChatGPT can aid scientific publishing by generating ideas, clarifying concepts, conducting preliminary literature reviews, providing proofreading and editing suggestions, and assisting with outlining and organizing scientific papers. However, it should be used as a supportive tool alongside domain expertise and validation from trusted scientific sources.”

Just a few months after ChatGPT became available, researchers began using this tool in their own work. One individual, Som Biswas, MD, a radiologist at the University of Tennessee Health Science Center in Memphis, reportedly used ChatGPT to author 16 scientific articles in just 4 months, according to the Daily Beast. Five of these articles have been published in four different journals. Dr. Biswas declined to be interviewed for this article.

There were also reports of papers that listed ChatGPT as an author, which sparked backlash. In response, JAMA, Nature, and Science all published editorials in January 2023 outlining their policies for the use of ChatGPT and other large language models in the scientific authoring process. Editors of the journals of the American College of Cardiology and the American College of Rheumatology also updated their policies to reflect the influence of AI authoring tools.

The consensus is that AI has no place on the author byline.

“We think that’s not appropriate, because coauthorship means that you are taking responsibility for the analysis and the generation of data that are included in a manuscript. A machine that is dictated by AI can’t take responsibility,” said Daniel Solomon, MD, MPH, a rheumatologist at Brigham and Women’s Hospital, Boston, and the editor in chief of the ACR journal Arthritis & Rheumatology.
 

Issues with AI

One of the big concerns around using AI in writing is that it can generate text that seems plausible but is untrue or not supported by data. For example, Dr. Greene and colleague Milton Pividori, PhD, also of the University of Colorado, were writing a journal article about new software they developed that uses a large language model to revise scientific manuscripts.

“We used the same software to revise that article and at one point, it added a line that noted that the large language model had been fine-tuned on a data set of manuscripts from within the same field. This makes a lot of sense, and is absolutely something you could do, but was not something that we did,” Dr. Greene said. “Without a really careful review of the content, it becomes possible to invent things that were not actually done.”

In another case, ChatGPT falsely stated that a prominent law professor had been accused of sexual assault, citing a Washington Post article that did not exist.

“We live in a society where we are extremely concerned about fake news,” Dr. Pividori added, “and [these kinds of errors] could certainly exacerbate that in the scientific community, which is very concerning because science informs public policy.”

Another issue is the lack of transparency around how large language models like ChatGPT process and store the data users enter as queries.

“We have no idea how they are recording all the prompts and things that we input into ChatGPT and their systems,” Dr. Pividori said.

OpenAI recently addressed some privacy concerns by allowing users to turn off their chat history with the AI chatbot, so conversations cannot be used to train or improve the company’s models. But Dr. Greene noted that the terms of service “still remain pretty nebulous.”

Dr. Solomon is also concerned about researchers using these AI tools in authoring without knowing how they work. “The thing we are really concerned about is the fact that [LLMs] are a bit of a black box – people don’t really understand the methodologies,” he said.
 

A positive tool?

Despite these concerns, many think that these AI-assisted tools could have a positive impact on medical publishing, particularly for researchers for whom English is not their first language, noted Catherine Gao, MD, a pulmonary and critical care instructor at Northwestern University, Chicago. She recently led research comparing scientific abstracts generated by ChatGPT with real abstracts and discovered that reviewers found it “surprisingly difficult” to differentiate the two.

“The majority of research is published in English,” she said in an email. “Responsible use of LLMs can potentially reduce the burden of writing for busy scientists and improve equity for those who are not native English speakers.”

Dr. Pividori agreed, adding that, as a non-native English speaker, he spends far more time than native speakers do working on the structure and grammar of his sentences when authoring a manuscript. He noted that these tools can also automate some of the more monotonous tasks of manuscript writing and let researchers focus on the more creative aspects.

In the future, “I want to focus more on the things that only a human can do and let these tools do all the rest of it,” he said.
 

New rules

Regardless of how individual researchers feel about LLMs, they agree that these AI tools are here to stay.

“I think that we should anticipate that they will become part of the medical research establishment over time, when we figure out how to use them appropriately,” Dr. Solomon said.

While the debate over how best to use AI in medical publications will continue, journal editors agree that the human authors of a manuscript remain fully responsible for the content of articles that used AI-assisted technology.

“Authors should carefully review and edit the result because AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased,” the ICMJE guidelines state. “Authors should be able to assert that there is no plagiarism in their paper, including in text and images produced by the AI.” This includes appropriate attribution of all cited materials.

The committee also recommends that authors state, in both the cover letter and the submitted work, how AI was used in the manuscript writing process. Recently updated guidelines from the World Association of Medical Editors recommend that all prompts used to generate new text or analytical work be provided in the submitted work. Dr. Greene also noted that authors who used an AI tool to revise their work can include a version of the manuscript untouched by LLMs.

The practice is similar to posting a preprint, he said, but rather than sharing a version of a paper prior to peer review, authors would share a version of the manuscript before it was reviewed and revised by AI. “This type of practice could be a path that lets us benefit from these models,” he said, “without having the drawbacks that many are concerned about.”

Dr. Solomon has financial relationships with AbbVie, Amgen, Janssen, CorEvitas, and Moderna. Dr. Greene and Dr. Pividori are inventors on U.S. Provisional Patent Application No. 63/486,706, filed by the University of Colorado with the U.S. Patent and Trademark Office for the invention “Publishing Infrastructure For AI-Assisted Academic Authoring.” They also received a grant from the Alfred P. Sloan Foundation to improve their AI-based manuscript revision tool. Dr. Gao reported no relevant financial relationships.

A version of this article originally appeared on Medscape.com.
