The airline tried to claim the bot was responsible for its own actions.
It's clear companies are liable for their AI models, even when those models make mistakes beyond their control.
Luckily, it's not a very widespread problem.

It only happens between 2% and, at the high end, maybe 10% of the time.
But still, it can be very dangerous in a business environment.

But most AI experts dislike this term.
The terminology, and what lies behind it (our misunderstanding of how these occurrences happen), can lead to pitfalls with ripple effects into the future.

As former VP of Product Intelligence Engineering at Yahoo!
But that's not what's happening behind the lines of code that put these models into operation.
It's very common that we as humans fall into this kind of trap.
For our minds to comprehend such a complex topic, we use shortcuts.
"I think the media played a big role in that because it's an attractive term that creates a buzz. So they latched onto it, and it's become the standard way we refer to it now," Awadallah says.
It's really attributing more to the AI than it is.
It's not thinking in the same way we're thinking.
If he had to give this occurrence a name, he would call it a confabulation.
"[AI models are] highly incentivised to answer any question. It doesn't want to tell you, 'I don't know'," says Awadallah.
For example, cultural context can result in different perspectives and responses to the same query.
Although this is a much larger amount of memory than a human can retain, it's not unlimited.
Can AI misinformation be solved?
There's been a lot of talk about whether or not confabulations can be solved.
Awadallah and his team at Vectara are developing a method to combat confabulations in narrow domain knowledge systems.
This is known as Retrieval Augmented Generation (RAG).
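To make the idea concrete, here is a minimal sketch of how a RAG-style system grounds an answer in retrieved text rather than in the model's memory alone. The documents, the word-overlap retrieval, and the prompt wording are illustrative assumptions for this article, not Vectara's actual implementation, which would use proper vector search and a production language model.

```python
# Minimal RAG sketch: retrieve relevant passages, then ask the model
# to answer only from them, reducing the room for confabulation.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (a stand-in
    for real vector-similarity search) and return the top k."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Ask the model to answer only from the retrieved passages, and to
    say it doesn't know when they don't contain the answer."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the passages below. "
        "If they do not contain the answer, say you don't know.\n"
        f"Passages:\n{context}\n"
        f"Question: {query}\n"
    )

if __name__ == "__main__":
    docs = [
        "Refund requests must be filed within 90 days of the flight.",
        "Checked baggage is limited to 23 kg per passenger.",
        "Pets may travel in the cabin on flights under four hours.",
    ]
    question = "How long do I have to request a refund?"
    prompt = build_grounded_prompt(question, retrieve(question, docs))
    print(prompt)  # this grounded prompt would then be sent to the model
```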
Some researchers recently published a promising paper on the use of semantic entropy to detect AI misinformation.
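The idea behind semantic entropy, roughly: sample several answers to the same question, group the answers that mean the same thing, and measure how spread out those meaning clusters are; high entropy suggests the model is confabulating. The sketch below is a loose approximation: the published method groups answers with a bidirectional-entailment model, whereas here equivalence is crudely approximated by normalised string matching, and the sample answers are invented for illustration.

```python
import math
from collections import Counter

def semantic_entropy(answers: list[str]) -> float:
    """Entropy over clusters of semantically equivalent answers.
    'Equivalent' is crudely approximated here by lowercasing and
    stripping punctuation; the published method instead uses a
    bidirectional-entailment model to decide equivalence."""
    def normalise(a: str) -> str:
        return "".join(c for c in a.lower() if c.isalnum() or c.isspace()).strip()

    clusters = Counter(normalise(a) for a in answers)
    total = sum(clusters.values())
    return -sum((n / total) * math.log(n / total) for n in clusters.values())

# Hypothetical samples for the same question: consistent answers give low
# entropy; scattered, mutually inconsistent answers give high entropy,
# flagging a likely confabulation.
consistent = ["Paris.", "paris", "Paris"]
scattered = ["Paris.", "Lyon.", "Marseille."]
print(semantic_entropy(consistent))  # 0.0
print(semantic_entropy(scattered))   # ~1.1 (log 3)
```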
Could limiting their responses also limit our ability to use them for creative tasks?