Two Stanford heavyweights have weighed in on the fiery AI sentience debate, and the duo is firmly in the BS corner.
The wrangle recently rose to a crescendo over arguments about Google's LaMDA system.
Developer Blake Lemoine sparked the controversy.

The 41-year-old told The Washington Post that his conversations with the AI convinced him that it had a sentient mind.
"I know a person when I talk to it," he said. "It doesn't matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them."
In July, the company put Lemoine on leave for publishing confidential information.
Google might call this sharing proprietary property.
AI experts, however, have largely dismissed Lemoine's argument.
The Stanford duo this week shared further criticisms with The Stanford Daily.
LaMDA, they argued, is a software program designed to produce sentences in response to sentence prompts.
Yoav Shoham, the former director of the Stanford AI Lab, agreed that LaMDA isn't sentient.
He described The Washington Post article as "pure clickbait."
The hype may generate clicks and market products, but researchers fear it's distracting us from more pressing issues.
LLMs are causing particular alarm.
While the models have become adept at generating humanlike text, excitement about their intelligence can mask their shortcomings.
Research shows the systems can have enormous carbon footprints, amplify discriminatory language, and pose real-life dangers.
But focusing on the prospect of sentience is making us overlook the real-life consequences that are already unfolding.
Story by Thomas Macaulay
Thomas is the managing editor of TNW.
He leads our coverage of European tech and oversees our talented team of writers.
Away from work, he enjoys playing chess (badly) and the guitar (even worse).