
Generating Seemingly Authentic Scientific Articles Using AI

A new study reported by Dr. Martin Májovský and collaborators has shown that artificial intelligence (AI) language models like ChatGPT (Chat Generative Pre-trained Transformer) have the potential to generate false scientific articles that appear exceptionally authentic.


AI-generated image created in response to the prompt “pandoras box opened with a physician standing next to it. Oil painting Henry Matisse style” (Generator: DALL-E2/OpenAI, March 9, 2023; Requestor: Martin Májovský). Image Credit: Created with DALL-E2, an AI system by OpenAI; Copyright: N/A (AI-generated image).

The study was reported in the Journal of Medical Internet Research on May 31st, 2023.

This development raises serious concerns regarding the integrity of scientific research and the reliability of published papers.

Scientists from Charles University, Czech Republic, set out to examine the ability of current AI language models to generate high-quality fraudulent medical articles.

The team used the well-known AI chatbot ChatGPT, which runs on the GPT-3 language model developed by OpenAI, to produce an entirely fabricated scientific article in the field of neurosurgery. Questions and prompts were refined as ChatGPT generated responses, allowing the quality of the output to be improved iteratively.

The outcomes of this proof-of-concept study were remarkable: the AI language model successfully produced a fraudulent article that closely mirrored a genuine scientific study in terms of word usage, sentence structure, and overall composition.

The article included the standard sections (abstract, introduction, methods, results, and discussion) as well as tables and other data. Remarkably, the entire process of creating the article took just one hour and required no special training of the human user.

While the AI-generated article appeared sophisticated and flawless on the subject, on closer examination expert readers were able to identify semantic errors and inaccuracies, particularly in the references: some references were incorrect, and others were nonexistent.

This highlights the need for increased vigilance and improved detection methods to fight the possible misuse of AI in scientific research.

The study's findings underscore the importance of developing ethical guidelines and best practices for the use of AI language models in genuine scientific writing and research. Models such as ChatGPT can improve the efficiency and accuracy of result analysis, document creation, and language editing.

By using such tools carefully and responsibly, scientists can harness their power while minimizing the risk of misuse or abuse.

In a commentary on Dr. Májovský’s article, Dr. Pedro Ballester discusses the need to prioritize the visibility and reproducibility of scientific works, as they act as necessary safeguards against the flourishing of fraudulent research.

As AI continues to advance, it will be vital for the scientific community to verify the accuracy and authenticity of content produced by such tools and to implement processes for detecting and preventing fraud and misconduct.

While both articles agree that the accuracy and authenticity of AI-generated content need to be better verified, how this could be achieved remains unclear.

“We should at least declare the extent to which AI has assisted the writing and analysis of a paper,” suggests Dr. Ballester as a starting point.

Another possible solution suggested by Májovský and collaborators is to make the submission of data sets mandatory.

Journal Reference

Májovský, M., et al. (2023) Artificial Intelligence Can Generate Fraudulent but Authentic-Looking Scientific Medical Articles: Pandora’s Box Has Been Opened. Journal of Medical Internet Research. doi.org/10.2196/46924.
