
User:Wdan14/Hallucination (artificial intelligence)


Scientific Research

Hallucinations also pose problems for academic and scientific research, because language models such as ChatGPT have been documented in multiple cases citing sources that are incorrect or do not exist. A study published in the Cureus Journal of Medical Science found that of 178 references cited by ChatGPT, 69 returned an incorrect or nonexistent DOI, and an additional 28 had no known DOI and could not be located in a Google search.[1]

Another instance was documented by Jerome Goddard of Mississippi State University. During a lab experiment, ChatGPT provided Goddard and his research team with questionable information about ticks. Unsure of the response's validity, they asked for the source of the information and found that not only the DOI but also the names of the authors had been hallucinated. Some of the listed authors were contacted and confirmed they had no knowledge of the paper's existence.[2] Goddard says that "in [ChatGPT's] current state of development, physicians and biomedical researchers should NOT ask ChatGPT for sources, references, or citations on a particular topic. Or, if they do, all such references should be carefully vetted for accuracy."[2] In his view, these language models are not ready for use in academic research and their output should be handled with care.

Beyond providing incorrect or nonexistent references, ChatGPT also hallucinates the contents of real reference material. A study that analyzed 115 references provided by ChatGPT found that 47% of them were fabricated and another 46% cited real references but extracted incorrect information from them; only the remaining 7% were cited correctly and accurately represented. ChatGPT has also been observed to "double down" on incorrect information: when questioned about a possible hallucination, it sometimes attempts to correct itself but at other times insists the response is correct and provides even more misleading information.[3]

Hallucinated articles generated by language models also pose a problem because it can be very difficult for a human reader to tell whether a text was generated by an AI. To demonstrate this, researchers at Northwestern University in Chicago generated 50 research abstracts based on existing papers and analyzed their originality. Plagiarism detectors gave the generated abstracts an originality score of 100%, meaning the text appeared to be completely original. Software designed to detect AI-generated text correctly identified only 66% of the generated abstracts, and human reviewers fared similarly, correctly identifying 68%.[4] The authors of the study concluded that "[t]he ethical and acceptable boundaries of ChatGPT's use in scientific writing remain unclear, although some publishers are beginning to lay down policies."[5] Because AI can fabricate research that goes undetected, its use will make determining the originality of research more difficult and may require new policies regulating its use in the future.

Because AI-generated text can pass as genuine scientific writing even when presented to real researchers, hallucinations complicate the application of language models in academic and scientific research. The high likelihood of returning nonexistent reference material and incorrect information may require limitations to be placed on these language models. Some argue that these outputs are better described as "fabrications" and "falsifications" than as hallucinations, and that their use poses a risk to the integrity of the field as a whole.[6]

References

  1. ^ Athaluri, Sai Anirudh; Manthena, Sandeep Varma; Kesapragada, V S R Krishna Manoj; Yarlagadda, Vineel; Dave, Tirth; Duddumpudi, Rama Tulasi Siri (2023-04-11). "Exploring the Boundaries of Reality: Investigating the Phenomenon of Artificial Intelligence Hallucination in Scientific Writing Through ChatGPT References". Cureus. doi:10.7759/cureus.37432. ISSN 2168-8184. PMC 10173677. PMID 37182055.
  2. ^ a b Goddard, Jerome (2023-06-25). "Hallucinations in ChatGPT: A Cautionary Tale for Biomedical Researchers". The American Journal of Medicine. 136 (11): 1059–1060. doi:10.1016/j.amjmed.2023.06.012. ISSN 0002-9343.
  3. ^ Bhattacharyya, Mehul; Miller, Valerie M.; Bhattacharyya, Debjani; Miller, Larry E. (2023-05-19). "High Rates of Fabricated and Inaccurate References in ChatGPT-Generated Medical Content". Cureus. 15 (5). doi:10.7759/cureus.39238. ISSN 2168-8184. PMC 10277170. PMID 37337480.
  4. ^ Else, Holly (2023-01-12). "Abstracts written by ChatGPT fool scientists". Nature. 613 (7944): 423. doi:10.1038/d41586-023-00056-7.
  5. ^ Gao, Catherine A.; Howard, Frederick M.; Markov, Nikolay S.; Dyer, Emma C.; Ramesh, Siddhi; Luo, Yuan; Pearson, Alexander T. (2023-04-26). "Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers". npj Digital Medicine. 6 (1): 1–5. doi:10.1038/s41746-023-00819-6. ISSN 2398-6352. PMC 10133283. PMID 37100871.
  6. ^ Emsley, Robin (2023-08-19). "ChatGPT: these are not hallucinations – they're fabrications and falsifications". Schizophrenia. 9 (1): 1–2. doi:10.1038/s41537-023-00379-4. ISSN 2754-6993. PMC 10439949. PMID 37598184.