A Google AI engineer recently stunned the world by announcing that one of the company's chatbots had become sentient.
He was subsequently placed on paid administrative leave for his outburst.
His name is Blake Lemoine and he sure seems like the right person to talk about machines with souls.

Not only is he a professional AI developer at Google, but he's also a Christian priest.
He's like a Reese's Peanut Butter Cup of science and religion.
The only problem is that the whole concept is ridiculous and dangerous.
There are thousands of AI experts debating sentience right now, and they all seem to be talking past each other.
He claims he was doing routine maintenance on a chatbot when he discovered that it had become sentient.
We've seen this movie a hundred times.
He's the chosen one.
Lemoine's essential argument is that he can't really demonstrate how the AI is sentient; he just feels it.
And the only reason he said anything at all is because he had to.
The big problem comes in when you realize that LaMDA isn't acting oddly or generating text that seems strange.
Its doing exactly what it was designed to do.
So how do you debate something with someone whose only contribution to the argument is their faith?
Here's the scary part: Lemoine's argument feels just as good as anyone else's.
I don't mean to say it's as worthy as anyone else's.
I'm saying that nobody's thoughts on the matter seem to hold any real weight anymore.
Lemoine's assertions, and the subsequent attention they've garnered, have reframed the conversation around sentience.
It all sounds preposterous and silly, but what happens if Lemoine gains followers?
These models are trained on databases that contain large portions of the internet.
That means they could hold near-endless amounts of private information.
It also means that these models can probably argue politics better than the average social media denizen.
But there's empirical evidence that the 2016 US presidential elections were swayed by chatbots armed with nothing more than memes.