In this series we examine some of the most popular doomsday scenarios prognosticated by modern AI experts.

Previous articles include Misaligned Objectives, Artificial Stupidity, Wall-E Syndrome, Humanity Joins the Hivemind, and Killer Robots.

The democratization of expertise might sound like a good thing: democracy, expertise, what's not to like?

A beginner’s guide to the AI apocalypse: The democratization of ‘expertise’


A game of Trivial Pursuit is only as accurate as its database.

The correct answer is Barry Bonds with 73.


Now, let's extend that idea into a database that isn't curated by experts.

What does this have to do with AI?

Sometimes the wisdom of crowds is useful.


Such as when you're trying to figure out what to watch next.

Whether it's useful for large language models (LLMs) depends on how they're used.

LLMs are a type of AI system used in a wide variety of applications.

And you dont have to look very far to imagine the possibilities.

It's a brittle, easily confused mess that more often spits out gibberish and thirsty "let's be friends!" nonsense than anything coherent, but it's pretty fun when the parlor trick works out just right.

At one point, the AI decided it was a woman.

At another, it decided that I was actually the actor Paul Greene.

All of this is reflected in its so-called Long Term Memory:

It also assigns me tags.

If we chat about cars, it might give me the tag "likes cars."
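Meta hasn't published how BB3's tagging actually works, so the mechanism could be anything from keyword matching to a classifier. As a purely illustrative sketch (the keywords, tag names, and function are all invented for this example), the simplest version might look something like this in Python:

```python
# Hypothetical sketch of persona tagging in a BB3-style chatbot.
# Meta has not documented this mechanism; the keywords and tag
# names below are invented for illustration only.

# Topic keywords mapped to the tag a chatbot might pin on a user.
KEYWORD_TAGS = {
    "car": "likes cars",
    "movie": "likes movies",
    "cook": "likes cooking",
}

def assign_tags(message: str, existing_tags: set) -> set:
    """Return the user's tag set, extended with any tags triggered
    by keywords appearing in the latest chat message."""
    words = message.lower().split()
    new_tags = set(existing_tags)
    for keyword, tag in KEYWORD_TAGS.items():
        # Prefix match so "cars" and "cooking" also trigger.
        if any(word.startswith(keyword) for word in words):
            new_tags.add(tag)
    return new_tags

print(assign_tags("I rebuilt my car engine last weekend", set()))
```

Even a toy version like this shows why the tags feel personal: they accumulate only as you keep talking, one conversation at a time.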

But it doesn't assign itself tags for its own benefit.

It could pretend to remember things without pasting labels into its UI. They're for us.

They're ways Meta can make us feel connected to and even a little responsible for the chatbot.

It's MY BB3 bot, it remembers ME, and it knows what I have taught it!

It's a form of gamification.

You have to earn those tags (both yours and the AI's) by talking.

The truth of the matter is that we're not training these LLMs to be smarter.

We're training them to be better at outputting text that makes us want them to output more text.

Is that a bad thing?

The problem is that BB3 was trained on a dataset that's so big we call it internet-sized.

It includes trillions of files that range from Wikipedia entries to Reddit posts.

That's all in the database.

If someone said it on Reddit or Twitter, it's probably been used to train the likes of BB3.

Despite this, Meta is designing it to imitate human trustworthiness and, apparently, to maintain our engagement.

At least we can fight killer robots.

Whats the worst that could happen?

We watched this play out to a small degree during the pandemic lockdowns.

Millions of people with no medical training decided to disregard medical advice based on their political ideology.

It teaches us to trust any idea as long as the crowd thinks it makes sense.

The democratization of expertise is what happens when everyone believes theyre an expert.

What happens when all the armchair experts get an AI companion to egg them on?
