A lot of the articles discussed why deep neural networks are not sentient or conscious.
For humans, language is a means to communicate the complicated and multi-dimensional activations happening in our brains.
Think of language as a compression algorithm that helps transfer the enormous amount of information in the brain to another person.

Language is built on top of our shared experiences in the world.
Without those experiences, language has no meaning.
This is why language usually omits commonsense knowledge and information that interlocutors share.
In contrast, large language models have no physical or social experience.
How do transformers manage to make very convincing predictions?
They turn text into tokens and embeddings, mathematical representations of words in a multi-dimensional space.
With enough examples, these embeddings can create good approximations of how words should appear in sequences.
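To make this concrete, here is a minimal sketch of the idea, not any particular model's actual tokenizer or vocabulary: a toy whitespace "tokenizer" maps words to ids, and each id is looked up in a matrix of vectors (randomly initialized here, whereas a real model learns them during training).

```python
import numpy as np

# Toy vocabulary and embedding table; names and sizes are illustrative only.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
embedding_dim = 8                       # real models use hundreds or thousands of dimensions
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), embedding_dim))  # learned vectors in a real model

sentence = "the cat sat on the mat"
token_ids = [vocab[word] for word in sentence.split()]     # "tokenize" by splitting on spaces
vectors = embeddings[token_ids]                            # one vector per token

print(token_ids)      # [0, 1, 2, 3, 0, 4]
print(vectors.shape)  # (6, 8): six tokens, each represented as an 8-dimensional vector
```

Everything the model "knows" about a word is encoded in numbers like these, shaped only by statistical patterns in text.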
But the fundamental difference remains.
Neural networks process language by turning it into embeddings, but they are still far from speaking our language.
For example, with enough effort, you might be able to train a chimpanzee to drive a car.
But would you put it behind a steering wheel on a road where pedestrians are crossing?
Likewise, a parrot can be taught many phrases.
But would you trust it to be your customer service agent?
When we hire a person for these jobs, we don't question their sentience, consciousness, or personhood.
What matters is whether you could trust the person to think and decide as an average human would.
What do we know about LaMDA?
Well, for one thing, it doesn't sense the world as we do.
Its knowledge of language isn't built on the same kind of experiences as ours.
Natural language is in some ways different from, but in other ways similar to, the other problems AI has solved.
Large language models lack the human experience that comes with learning language.
But they can still be useful for solving well-defined language-related problems.