ChatGPT Fools Scientists with Fake Abstracts for Research Papers

For those in scientific publishing, the abstract is essentially the blueprint that gets an article accepted by peers in an academic journal. Scientists depend on abstracts to establish their research as novel during the discovery phase, which in turn secures ownership of the idea and, hopefully, additional funding. According to recent research, ChatGPT was used to generate abstracts from the titles of real medical papers published in top research journals. With GPT-4 on the horizon, this could cause a monumental disruption to one of the core facilitators of scientific discovery and to author autonomy. Are we on the cusp of innovation becoming more democratic, or more open source? Only time will tell. 2023 will be a pivotal point in how research is tethered to the researcher.

  • Human reviewers could only detect fake abstracts 68% of the time
  • If used unscrupulously, ChatGPT could undermine scientific research
  • AI language models also could be used for good in scientific writing

Source: Northwestern University Now

A team of researchers led by Northwestern University used the text-generation tool, developed by OpenAI, to produce 50 abstracts based on the titles of real scientific papers, written in the style of five different medical journals.
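
To give a sense of what that involves, here is a minimal sketch of how an abstract might be generated from a paper title using OpenAI's Python client. This is an illustration only: the model name, prompt wording, and the generate_fake_abstract helper are assumptions, not the study's actual procedure, which used ChatGPT itself.

# Hypothetical sketch: generating an abstract from a paper title.
# The model, prompt, and helper name are illustrative assumptions,
# not the study's protocol (the researchers used ChatGPT directly).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_fake_abstract(title: str, journal: str) -> str:
    """Ask the model to invent a plausible abstract given only a title."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model choice
        messages=[{
            "role": "user",
            "content": (
                f"Write a scientific abstract for a paper titled '{title}' "
                f"in the style of an article published in {journal}."
            ),
        }],
    )
    return response.choices[0].message.content

# Example usage with a made-up title and a well-known journal
print(generate_fake_abstract(
    "Effects of a Novel Antihypertensive Drug on Blood Pressure in Adults",
    "JAMA",
))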

Reviewers were able to detect 68 percent of fake abstracts generated by AI and 86 percent of original abstracts from real papers. In other words, they were successfully tricked into thinking 32 percent of the AI-written abstracts were real, and 14 percent of the real abstracts were fake.


Catherine Gao, first author of the study and a physician and scientist specialising in pulmonology at Northwestern University, said the results show ChatGPT can be pretty convincing. “Our reviewers knew that some of the abstracts they were being given were fake, so they were very suspicious,” she said in a statement.

Source: The Register

“Our reviewers commented that it was surprisingly difficult to differentiate between the real and fake abstracts,” Gao said. “The ChatGPT-generated abstracts were very convincing…it even knows how large the patient cohort should be when it invents numbers.” A fake abstract about hypertension, for example, described a study with tens of thousands of participants, whilst one on monkeypox included a smaller number of patients.

“There have been groups who have started using it to help writing, though, and some have included it as a listed co-author. I think that it may be okay to use ChatGPT for writing help, but when this is done, it is important to include a clear disclosure that ChatGPT helped write sections of a manuscript. Depending on what the scientific community consensus ends up being, we may or may not use LLMs to help write papers in the future.”

Source: Scientists tricked into believing fake abstracts written by ChatGPT were real. Study warns tool could be used to create fake research papers for paper mills

