Welcome to Codifying Humanity.
A new Neural series that analyzes the machine learning world's attempts at creating human-level AI.
Read the first article: Can humor be reduced to an algorithm?

Kurzweil has predicted the advent of over 100 technology advances with a greater than 85% success rate.
The problem
Machines have no impetus towards sentience.
Our biological programming directives may never get resolved.
We live because the alternative is death and, for whatever reason, we have a survival instinct.
As sentient creatures we're aware of our mortality.
And it's arguable that this awareness is exactly what separates human intellect from animal intelligence.
To convincingly navigate our world, including its human inhabitants and their motivations, habits, customs, and behavior, an agent would need to fake an understanding of all of these.
That being said: how do we teach machines to understand their own mortality?
We're the only species that wars because we're the only species capable of fearing war.
Start killing robots
Humans tend to learn through experience.
If the stove burns you, you probably won't touch it again.
AI learns through a similar process, but it doesn't exploit learning in the same way.
You could write algorithms for finding blue dots, but algorithms don't execute themselves.
So you have to execute the algorithms and then adjust the AI based on the results you get, and so on.
The AI's reason for doing this has nothing to do with wanting to find blue dots.
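The execute-then-adjust cycle described above can be sketched as a toy hill-climb. Everything here is invented for illustration: the pixel data, the `detector` function, and the single tunable threshold. The point is only that the machine's "learning" is an outer loop we run, score, and nudge; no desire is involved.

```python
import random

random.seed(0)

# Toy dataset: (r, g, b) pixels, labeled a "blue dot" when blue clearly dominates.
pixels = [(random.random(), random.random(), random.random()) for _ in range(500)]
labels = [b > r and b > g and b > 0.6 for r, g, b in pixels]

def detector(pixel, threshold):
    """Hypothetical blue-dot detector: flags pixels whose blue channel
    dominates and exceeds a tunable threshold."""
    r, g, b = pixel
    return b > threshold and b > r and b > g

def accuracy(threshold):
    """Execute the algorithm over the data and measure the results."""
    hits = sum(detector(p, threshold) == y for p, y in zip(pixels, labels))
    return hits / len(pixels)

# The outer loop the article describes: run, measure, adjust, repeat.
threshold, step = 0.1, 0.05
best = accuracy(threshold)
for _ in range(40):
    candidate = threshold + step
    score = accuracy(candidate)
    if score >= candidate_floor if False else score >= best:  # keep adjustments that help
        threshold, best = candidate, score
    else:                                                     # otherwise stop nudging
        break

print(f"tuned threshold={threshold:.2f}, accuracy={best:.2%}")
```

The detector never "wants" to find blue dots; the threshold drifts toward a good value only because we, from outside, keep the adjustments that score well and discard the rest.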
Humans aren't programmed with hardcoded goals.
The only thing we know for certain is that death is inevitable.
And, arguably, that's the spark that drives us towards accomplishing self-defined objectives.
Perhaps the only way to force an AGI to emerge is to develop an algorithm for artificial lifespans.
Many theories on AGI dismiss the idea of machine sentience altogether.
And perhaps those are the best ones to pursue.
I don't need a robot to like cooking, I just want it to make dinner.
Perhaps it will be superintelligent without ever experiencing self-awareness.