Naturally, the question arises: What is the basis upon which something has rights?
What gives an entity moral standing?
The philosopher Peter Singer argues that creatures that can feel pain or suffer have a claim to moral standing.

He argues that nonhuman animals have moral standing since they can feel pain and suffer.
Limiting moral standing to people would be a form of speciesism, something akin to racism and sexism.
Extending Singer's reasoning to Data would require that he can either feel pain or suffer.

And how you answer that depends on how you understand consciousness and intelligence.
As real artificial intelligence technology advances toward Hollywood's imagined versions, the question of moral standing grows more important.

Early chess computers, for example, could play chess at a high level, but they could not do anything else. Such a computer has what's called domain-specific intelligence. Humans, by contrast, can do many different kinds of things well; that capacity is called domain-general intelligence.

Artificial general intelligence (AGI) is the term for machines that have domain-general intelligence.
Arguably no machine has yet demonstrated that kind of intelligence.
This summer, a startup called OpenAI released a new version of its Generative Pre-Training language model.
Despite this impressive performance, GPT-3 doesn't actually know anything beyond how to string words together in various ways.
AGI remains quite far off.
Named after pioneering AI researcher Alan Turing, the Turing test helps determine when an AI is intelligent.
Can a person conversing with a hidden AI tell whether it's an AI or a human being?
If they can't, then for all practical purposes, the AI is intelligent.
But this test says nothing about whether the AI might be conscious.
Two kinds of consciousness
There are two parts of consciousness.
First, there's the what-it's-like-for-me aspect of an experience, the sensory part of consciousness.
Philosophers call this phenomenal consciousness.
It's about how you experience a phenomenon, like smelling a rose or feeling pain.
In contrast, there's also access consciousness: awareness that is available for reasoning and for guiding behavior. Suppose I am playing soccer and see a teammate open: I make the pass automatically, without conscious deliberation, in the flow of the game.
Blindsight nicely illustrates the difference between the two types of consciousness. Someone with blindsight reports seeing nothing in part of their visual field, yet can still reach out and grab a pen placed there: access consciousness without phenomenal consciousness.
Data is an android.
How do these distinctions play out with respect to him?
Do Datas qualities grant him moral standing?
Data is also intelligent in the general sense.
He does a lot of distinct things at a high level of mastery.
Data has access consciousness.
He would clearly pass the Turing test.
He embodies a supersized version of blindsight.
He's self-aware and has access consciousness (he can grab the pen), but across all his senses he lacks phenomenal consciousness.
But Data might fulfill the other condition of being able to suffer, even without feeling pain.
Suffering might not require phenomenal consciousness the way pain essentially does.
Data's reduced functioning, which keeps him from saving his crewmate, is a kind of nonphenomenal suffering.
He would have preferred to save the crewmate, and he would have been better off if he had.
Nor is it in question whether he is intelligent: he easily demonstrates that he is, in the general sense.
What is unclear is whether he is phenomenally conscious.
Should an AI get moral standing?
Data is kind: he acts to support the well-being of his crewmates and those he encounters on alien planets.
He obeys orders from people and appears unlikely to harm them, and he seems to protect his own existence.
For these reasons he appears peaceful and easier to accept into the realm of things that have moral standing.
But what about Skynet in the Terminator movies?
Or the worries recently expressed by Elon Musk about AI being more dangerous than nukes, and by Stephen Hawking on AI ending humankind?
Human beings don't lose their claim to moral standing just because they act against the interests of another person. In the same way, you can't automatically say that an AI lacks moral standing just because it acts against the interests of humanity.
There are no artificial general intelligence machines yet.
But now is the time to consider what it would take to grant them moral standing.