This is either the most advanced AI system in the entire known universe or it's a total sham.

Unsurprisingly, it's a sham: there's little reason for excitement.

You don't even have to read the researchers' paper to debunk their work.

All you need is the phrase "politics change," and we're done here.

But, just for fun, let's actually get into the paper and explain how prediction models work.

So, right off the bat: the AI isn't predicting or identifying politics.

It's forced to choose between the data in column A and the data in column B.
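To see what that forced choice looks like, here's a minimal sketch; the scikit-learn classifier and the stand-in data are my choices for illustration, not anything from the actual study. Once trained on two labels, the model will assign one of them to anything you feed it, including pure noise.

```python
# A sketch (NOT the researchers' pipeline): a binary classifier, once
# trained on two labels, assigns one of those two labels to *anything*.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in "brain scan" features: 200 samples, 50 features each.
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)  # arbitrary "column A" / "column B" labels

model = LogisticRegression(max_iter=1000).fit(X, y)

# Feed the model random static. It cannot answer "neither" or "I don't
# know"; it is forced to pick A or B every single time.
static = rng.normal(size=(5, 50))
print(model.predict(static))  # always one of the two trained labels
```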

Let's say I sneak into the Ohio State University AI center and scramble all their data up. The model wouldn't notice or care; it would keep sorting every scan into column A or column B all the same.
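Here's a sketch of that thought experiment, with the same caveat that the model and data are stand-ins I invented: shuffle the labels so they no longer match their rows, and the model trains and predicts without the slightest complaint.

```python
# Sketch of the "scrambled data" thought experiment (hypothetical setup).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))     # stand-in scan features
y = rng.integers(0, 2, size=200)   # original labels
y_scrambled = rng.permutation(y)   # "scrambled": labels no longer match rows

# Training raises no error; the model has no concept of what labels mean.
model = LogisticRegression(max_iter=1000).fit(X, y_scrambled)
print(model.predict(rng.normal(size=(5, 50))))  # still confidently A or B
```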

The reason is simple: human political ideologies do not exist as ground truths.

There is no conservative brain or liberal brain.

Many people are neither, or they're an amalgam of both.

Furthermore, many people who identify as liberal actually possess conservative views and mindsets, and vice versa.

So the first problem we run into is that the researchers do not define conservatism or liberalism.

What that means, ultimately, is that the data and the labels have no principled relationship to one another.

The models must brute-force an inference, so they do.

They may only choose from the prescribed labels, so they do.

What is accuracy?

These experiments don't exactly pit humans against machines.

They really just establish two benchmarks and then conflate them.

The scientists will give multiple humans the prediction task one or two times (depending on the controls).

Then they'll give the AI the prediction task hundreds, thousands, or millions of times.

They have to train the AI, tweaking it over and over until its accuracy stops improving.

They'd have no clue why any given tweak helped; remember, this all happens inside a black box.

The final number is simply the best the team could do.

They couldn't tweak it any better than that.
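To make that concrete, here's a hypothetical sketch of what "tweaking" usually amounts to in practice; the scikit-learn model and the candidate settings are my assumptions, not anything from the paper. You try a batch of configurations, keep whichever scores best, and learn nothing about why it won.

```python
# Sketch of hyperparameter "tweaking": keep the best score, learn nothing.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)

best_score, best_c = -1.0, None
for c in [0.01, 0.1, 1.0, 10.0, 100.0]:  # candidate regularization strengths
    score = cross_val_score(LogisticRegression(C=c, max_iter=1000), X, y).mean()
    if score > best_score:
        best_score, best_c = score, c

# The loop tells us *which* setting scored highest, never *why*.
print(f"best C={best_c}, accuracy={best_score:.2f}")
```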

Humans, given enough passes, could achieve 100% accuracy too, just by memorizing a binary label for each entry in a database of 200.
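And here's a sketch of why accuracy on data a model has already seen proves nothing; again, the model and data are stand-ins I made up. A one-nearest-neighbor classifier "memorizes" 200 random labels perfectly, then does no better than a coin flip on fresh data.

```python
# Memorization in action: perfect on seen data, chance on unseen data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)  # random labels: there is nothing to learn

model = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(model.score(X, y))                          # 1.0 -- pure memorization
print(model.score(rng.normal(size=(200, 50)),
                  rng.integers(0, 2, size=200)))  # ~0.5 -- a coin flip
```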

Benchmarking this particular prediction model is exactly as useful as measuring a tarot card reader's accuracy.

Good research, bad framing

That isn't to say this research doesn't have merit.

I wouldn't talk shit about a research team dedicated to exposing the flaws inherent to artificial intelligence systems.

You dont get mad at a security researcher who discovers a problem.

Unfortunately, that's not how this research is framed.

This is, in my opinion, borderline quackery.

Furthermore, its results cannot be validated by any standard of the scientific method.

We'll never know why or how the machine made any of its predictions.

We need research like this to probe the limits of how these predictive models can be exploited.

But pretending this research has resulted in anything more sophisticated than the Not Hotdog app is dangerous.

This isn't science; it's prestidigitation with data.
