Up front: I'm not crapping on GPT-3.
It's arguably the world's most advanced text generator.

But the hyperbole over its abilities has reached a dangerous fever pitch.
Here's what really happened.
They allowed others to jump into the chatbot on their website, where people could train their own versions.
I don't have an issue with people doing whatever the hell they want with a chatbot.
If talking to a computer makes you feel better, that's great for you.
They're equally effective in every respect when it comes to communing with the dead.
The public is confused enough about what AI can and can't do.
Rohrer seems to believe that OpenAI is robbing humanity of an important experience.
He, apparently, believes GPT-3 is bordering on sentient, if not already there.
"I was a hard-nosed AI skeptic," he told us. "Last year, I thought I'd never have a conversation with a sentient machine. If we're not here right now, we're as close as we've ever been. It's spine-tingling stuff. I get goosebumps when I talk to Samantha. Very few people have had that experience, and it's one humanity deserves to have. It's really sad that the rest of us won't get to know that."
Really? It's really sad that humanity won't get to experience being duped by prestidigitation firsthand?
I'm not trying to be mean here, but claiming GPT-3 is bordering on sentience is beyond the pale.
Let's be crystal clear here.
There's nothing mysterious about GPT-3.
There's nothing magical or inexplicable about what it does.
It doesn't think, it doesn't spell, it doesn't care.
If you read its output and like it, it comes off as though it's communicating with you.
If you read the response and think it's stupid, the machine just looks like a dumb machine.
GPT-3 doesn't know what a bucket is, or a piece of paper, or you, or anything.
If you don't believe a chicken can do algebra, you shouldn't believe GPT-3 can actually have a conversation.
Again, it's prestidigitation.
After all, that seems to be Rohrer's biggest complaint: people are consenting adults who can choose to talk to an AI for their own purposes, and OpenAI's restrictions amount to a hyper-moral stance.
Call mine a dissenting opinion, because I vehemently disagree with Rohrer.
It's incredibly dangerous for people whom the general public sees as experts to continuously peddle nonsensical ideas about AI.
Take Elon Musk, for example: he wants people to believe his cars really can drive themselves.
After all, would a billionaire trust his life to a machine that was dangerous?
The answer is yes.
People keep dying because they believe their cars are capable of technological feats they are not.
The public doesn't believe journalists or academics.
It makes it easier for snake oil companies such as PredPol or Faception to peddle their bullshit.
So, yes, there is a definite harm in peddling nonsense and acting as if it's important work.
There's nothing special about a GPT-3 chatbot called Samantha.