AI hallucinations pose ‘direct threat’ to science, Oxford study warns

Large Language Models (LLMs), such as those used in chatbots, have an alarming tendency to hallucinate: that is, to generate false content that they present as accurate.

LLMs are currently treated as knowledge sources and generate information in response to questions or prompts.


But the data they’re trained on isn’t necessarily factually correct.


LLMs will undoubtedly assist with scientific workflows, according to the Oxford professors.


Story by Ioanna Lykiardopoulou

