Apparently we've finally run out of real things to be scared of.

It's kind of boring, actually.

Don't get us wrong: the text generator called GPT-2 is pretty cool.

Who’s afraid of OpenAI’s big, bad text generator?

It can sometimes generate coherent blocks of text from a single phrase.

Its whereabouts are unknown.

Our top priority is to secure the theft and ensure it doesn't happen again.


The Nuclear Regulatory Commission did not immediately release any information.

Pretty cool, right?

None of the events in the AI-generated article actually happened; it's easy to verify that it's fake news.


But it's impressive to see a machine riff like that. Impressive, not terrifying.


The novel accomplishment here was not the text generator.


It was just having the resources available to train a bigger model than anyone else has before.

The result produced better text generation than the previous smaller model.

Here's what the headlines should've looked like: "OpenAI improves machine learning model for text generator."

It's not sexy or scary, but neither is GPT-2.

Here's what the headlines actually looked like:

What the heck happened?

OpenAI took a fairly normal approach to revealing the GPT-2 developments.

"We are not releasing the dataset, training code, or GPT-2 model weights."

"OpenAI will further publicly discuss this strategy in six months," wrote Clark.

No researchers that I know of got to see the large model, but journalists did.

Yes, they intentionally blew it up.

The story immediately became about the decision to withhold the full model.

Few news outlets covered the researchers' progress straight-up.

GPT-2's release spurred plenty of argument, but not the debate that Clark and OpenAI were likely hoping for.

Whether intentional or not, it manipulated the press.

Moreover, representatives stated that the concerns were more about AI-powered text generators in general, not GPT-2 specifically.

Those are two entirely different stories, and they probably shouldn't have been conflated in messaging to the media.

We won't editorialize on why OpenAI chose to do it that way, but the results speak for themselves.

Despite the fact that most of the actual reporting was quite deep, the headlines weren't.

But it's not true; there's nothing definitively dangerous about this particular text generator.

Just like Facebook never developed an AI so dangerous it had to be shut down after inventing its own language.

It's too late for damage control, though OpenAI did try to set the record straight.

The detractors made their objections heard.

The entire 1:07:06 video can be viewed here.

It hurts research, destroys media credibility, and distorts politicians views.

To paraphrase Anima Anandkumar: I'm not worried about AI-generated fake news; I'm worried about fake news about AI.
