Welcome to Codifying Humanity.

A new Neural series that analyzes the machine learning world's attempts at creating human-level AI.

Previous entries include Can humor be reduced to an algorithm?

Codifying humanity: Why AI sucks at content moderation

and Why robots should fear death as much as we do.

It doesn't even work as an acronym.

It’s free, every week, in your inbox.


The problem is simple.

Content that crosses the line, whatever a given platform's definition of that may be, needs to be curated.

But finding a solution is incredibly difficult.

A couple of decades ago, the answer was straightforward.

Online communities appointed human moderators to oversee forums.

The more users a site had, the more human moderators it needed to keep the discourse civil.

Unfortunately, that system can't function at global scale.

We'd need to create an entire planet full of moderators to moderate all the people on this one.

And then another to moderate the people on the second planet, and so on.

More efficient failure

Big tech chose none of the above.

In other words: AI sucks at content moderation.

That's not what's happening with this new system.

It will still fail to moderate the vast majority of user content on the platform.

It'll just fail more efficiently.

Meta's new AI is a few-shot learner.

But the assertion that this "signals a shift toward more intelligent, generalized AI systems" is laughable at best.
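For readers unfamiliar with the term: a few-shot learner classifies new inputs from only a handful of labeled examples, rather than millions. A minimal sketch of the idea, using a toy bag-of-words encoder and nearest-example matching in place of Meta's actual (unpublished here) pretrained model; all example posts and labels below are hypothetical:

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy embedding: a bag-of-words count vector.
    A stand-in for a real pretrained language-model encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A handful of labeled examples -- the "few shots" (made up for illustration).
support = [
    ("you are all wonderful people", "ok"),
    ("have a great day everyone", "ok"),
    ("i hate you and everyone like you", "violating"),
    ("get out of here you awful people", "violating"),
]

def classify(post):
    """Label a new post by its most similar labeled example."""
    scores = [(cosine(embed(post), embed(text)), label) for text, label in support]
    return max(scores)[1]

print(classify("i hate awful people like you"))  # → violating
```

The point of the sketch is the shape of the approach, not its quality: with only four examples, anything outside their narrow vocabulary is guesswork, which is one reason a few-shot system can still fail on the vast majority of content.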
