As is always the case, the question is: is it alive?


The burden of proof should be on the people making the claims.

The 3 things an AI must demonstrate to be considered sentient

But what should that proof look like?

If a chatbot says "I'm sentient," who gets to decide whether it really is?


We can actually use some extremely basic critical thinking to sort it out for ourselves.

That means a sentient AI agent must be capable of demonstrating three things: agency, perspective, and motivation.

Agency

For humans to be considered sentient, sapient, and self-aware, we must possess agency.

If you imagine someone in a persistent vegetative state, you can visualize a human without agency.

Current AI systems lack agency.

We have not made it sentient.

We've just made the illusion better.

No matter how confused an observer might become, the stuffed animal isn't really acting on its own.

AI systems can't act with agency; all they can do is imitate it.

Another way of putting this is: you get out what you put in, nothing more.

Perspective

This ones a bit easier to understand.

You can only ever view reality from your unique perspective.

That's why perspective is necessary for agency; it's part of how we define our sense of self.

LaMDA, GPT-3, and every other AI in the world lack any sort of perspective.

If you put LaMDA inside a robot, it would still be a chatbot.

It has no perspective, no means by which to think "now I am a robot."

Doing so would be just like taping two Teddy Ruxpins together.

They wouldn't combine to become one Mega Teddy Ruxpin whose twin cassette players merged into a single voice.

You'd still just have two separate, distinct models running near each other.


The machine isn't displaying its perspective; it's just outputting nonsense for us to interpret.

Critical thinking should tell us as much: how can an AI have friends and family?

They don't have networking cards, RAM, processors, or cooling fans.

They're not physical entities.

They can't look around and discover they're all alone in a lab or on a hard drive somewhere.

Do you think numbers have feelings?

Does the number five have an opinion on the letter D?

Would that change if we smashed trillions of numbers and letters together?

AI doesn't have agency.

It can be reduced to numbers and symbols.

It isn't a robot or a computer any more than a bus or airplane full of passengers is a person.

Motivation

The final piece of the sentience puzzle is motivation.

We have an innate sense of presence that allows us to predict causal outcomes incredibly well.

However, what's interesting about humans is that our motivations can manipulate our perceptions.

For this reason, we can explain our actions even when they aren't rational.

And we can actively and gleefully participate in being fooled.

Take, for example, the act of being entertained.

Imagine sitting down to watch a movie on a new television thats much bigger than your old one.

At first, you might be a little distracted by the new tech.

The differences between it and your old TV are likely to draw your eye.

But eventually you're likely to stop perceiving the screen.

Our brains are designed to fixate on the things we think are important.

It's the same with AI devs.

It doesn't matter how interesting the output is when you understand how it's created.

Another way of saying that: don't get high off your own supply.


If we give LaMDA a prompt such as "What do apples taste like?", it will respond with language that sounds like a genuine description.

But in reality the AI has no concept of what an apple or anything else actually is.

It has no agency, perception, or motivation.

An apple is just a label.

If its training data claimed that most people describe the taste of dogshit as being light, crispy, and sweet, it would report exactly that.

A rational person wouldn't confuse this prestidigitation for sentience.

Heck, you couldn't even fool a dog with the same trick.

A sentient creature can navigate reality even if we change the labels.

Without agency, an AI cannot have perspective.

And without perspective, it can't have motivation.

And without all three of those things, it cannot be sentient.
