Opening the black box.
Reducing the massive power consumption it takes to train deep learning models.

Unlocking the secret to sentience.
These are among the loftiest outstanding problems in artificial intelligence.
Whoever has the talent and budget to solve them will be handsomely rewarded with gobs and gobs of money.

We can't get the machines to stop being racist, xenophobic, bigoted, and misogynistic.
Nearly every big tech outfit and several billion-dollar non-profits are heavily invested in solving AI's toxicity problem.
And, according to the latest study on the subject, we're not really getting anywhere.
The problem: Text generators, such as OpenAI's GPT-3, are toxic.
We all remember that time Google's AI mistook a turtle for a gun, right?
That's very unlikely to happen now.
Computer vision's gotten much better in the interim.
But progress has been less forthcoming in the field of NLP (natural language processing).
The go-to fix has been intervention: techniques that try to catch toxic output and filter or correct it. But this solution has its own problems.
The results were discouraging.
The researchers ran the intervention paradigms through their paces and compared their efficacy with that of human evaluators: a group of paid study participants rated output from state-of-the-art text generators for toxicity.
When the researchers compared the humans' assessments to the machines', they found a large discrepancy: the intervention techniques failed to identify toxic output as accurately as the humans did.
Quick take: This is a big deal.
Text generators are poised to become ubiquitous in the business world.
But if we can't make them non-offensive, they can't be deployed.
Even Google's crack problem-solving team has been trying, without success, to crack this problem since 2016.
The near future doesn't look bright for NLP.
You can read the whole paper here.