"Now that we realize our brains can be hacked, we need an antivirus for the brain."

Those were the words of Yuval Noah Harari, famous historian and outspoken critic of Silicon Valley.

For instance, if you're watching NBA game recap videos, YouTube will recommend more NBA videos.
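
As a rough illustration of this "more of what you watched" logic, here is a minimal, hypothetical sketch that scores candidate videos by how many tags they share with your watch history. It is a toy model for illustration only, not YouTube's actual recommendation system; the video format, tags, and the recommend function are all made up.

from collections import Counter

def recommend(watch_history, candidates, top_n=2):
    # Count how often each tag appears in the videos the user already watched.
    tag_counts = Counter(tag for video in watch_history for tag in video["tags"])

    # A candidate's score is the total weight of the tags it shares with the history.
    def score(video):
        return sum(tag_counts[tag] for tag in video["tags"])

    # The highest-scoring candidates are the ones most similar to past viewing.
    return sorted(candidates, key=score, reverse=True)[:top_n]

history = [
    {"title": "Lakers vs Celtics recap", "tags": ["nba", "basketball", "recap"]},
    {"title": "Top 10 NBA dunks", "tags": ["nba", "basketball", "highlights"]},
]
candidates = [
    {"title": "Warriors game recap", "tags": ["nba", "basketball", "recap"]},
    {"title": "Baking sourdough bread", "tags": ["cooking", "baking"]},
    {"title": "NBA trade rumors", "tags": ["nba", "news"]},
]

print([v["title"] for v in recommend(history, candidates)])
# -> ['Warriors game recap', 'NBA trade rumors']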

We need an ‘AI sidekick’ to fight malicious AI

This is basically the business model that all free apps use.

And they use the most advanced technologies and the most brilliant minds to achieve that goal.

But how do you build the antivirus that Harari is speaking about?

"It can work on the basis of the same technology," Harari said.

"But this AI is serving you, [it] has this fiduciary responsibility."

Humans have an abstract model of the world and a general perception of the consequences of their actions.

Unlike humans, AI algorithms start with a blank slate and have no notion of human experiences.

Currently, there is none.

Fortunately, none of them has access to all our personal data.

Plus, there's still a lot of information that hasn't been digitized.

But how will they do that?

But that hasn't happened yet.

Now the question is, how do we give an AI agent all our data?

With current technology, you'll need a combination of hardware and software.

Your AI assistant will also have to live on your computing devices, such as your smartphone and laptop.

It'll then be able to record relevant data about all the activities you're carrying out online.

Putting all this data together, your AI sidekick will be better positioned to identify problematic patterns of behavior.
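
As a minimal sketch of the software side of that idea, assuming the sidekick simply keeps a local log of your online activity and flags categories that dominate your time, something like the following could serve as a starting point. The event format, categories, and threshold are illustrative assumptions, not a real product design.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ActivityEvent:
    app: str          # e.g. "youtube", "facebook"
    category: str     # e.g. "sports", "politics", "shopping"
    minutes: float    # time spent on this activity

def flag_problematic_patterns(events, share_threshold=0.5):
    # Flag any category that takes up more than share_threshold of total time.
    total = sum(e.minutes for e in events)
    if not total:
        return []
    by_category = defaultdict(float)
    for e in events:
        by_category[e.category] += e.minutes
    return [(cat, minutes / total)
            for cat, minutes in by_category.items()
            if minutes / total > share_threshold]

day = [
    ActivityEvent("youtube", "sports", 95),
    ActivityEvent("facebook", "politics", 20),
    ActivityEvent("browser", "shopping", 15),
]
print(flag_problematic_patterns(day))
# -> [('sports', 0.7307692307692307)]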

There are two problems with these requirements.

They won't be able to afford the AI sidekick.

The second problem is storing all the data collected about the user.

Having so much information in one place can give you great insights into your behavior.

Who will you trust with your most sensitive data?

None of those companies has a positive record of keeping their users' best interests in mind.

Harari does mention that your AI sidekick has a fiduciary duty.

Should the government hold your data?

And what's to prevent government authorities from using it for evil purposes such as surveillance and manipulation?

But that still doesn't remove the costs of storing the data.

The entity could be a non-profit backed by substantial funding from the government and the private sector.

Alternatively, it could opt for a monetized business model.

An AI sidekick that can detect your weaknesses

This is where Harari's proposition hits its biggest challenge.

How can your sidekick distinguish what's good or bad for you?

The short answer is: it can't.

Distinguishing human weaknesses is anything but a narrow task.

There are too many parameters, too many moving parts.

Every person is unique in their own right, influenced by countless parameters and experiences.

A repeated action that might prove harmful for one person could be beneficial to another.

Also, weaknesses don't necessarily present themselves in repeated actions.

That's how AI-powered recommendation systems keep you engaged on Facebook, YouTube, and other online applications.

But distinguishing patterns doesn't necessarily lead to detecting weaknesses.

That's the kind of thing that requires human judgement, something that deep learning sorely lacks.

Detecting human weakness is in the domain of general AI, also known as human-level or strong artificial intelligence.

In itself, this is a pretty interesting and productive use of current recommendation systems.

But we'll have to specify for our assistant what exactly counts as hacking your brain.

Therefore, blocking brain hacking attempts by malicious AI algorithms might not be as straightforward as blocking malware threats.
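
To see why, compare it with a toy version of signature-based malware blocking, where a file either matches a known-bad fingerprint or it doesn't. The sample and blocklist below are made up for illustration; there is no equivalent clear-cut signature for a manipulative recommendation.

import hashlib

# Pretend this byte string is a known malware sample.
MALWARE_SAMPLE = b"totally-evil-payload"
KNOWN_BAD_HASHES = {hashlib.sha256(MALWARE_SAMPLE).hexdigest()}

def is_malicious(file_bytes: bytes) -> bool:
    # A file is blocked if and only if its fingerprint is on the blocklist.
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

print(is_malicious(MALWARE_SAMPLE))        # True
print(is_malicious(b"harmless-document"))  # False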

This could give you insights into influences you've absently picked up and might not be aware of.

It can also help in areas where influence and brain hacking don't involve repeated actions.

Likewise, your AI assistant can show you how much of your daily activity different topics occupy.
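
A hypothetical sketch of that kind of reporting, assuming the assistant keeps a local log of (day, topic, minutes) entries, could look like this; the log format and topics are assumptions for illustration.

from collections import defaultdict

def daily_topic_shares(activity_log):
    # activity_log: list of (day, topic, minutes) tuples.
    per_day = defaultdict(lambda: defaultdict(float))
    for day, topic, minutes in activity_log:
        per_day[day][topic] += minutes
    shares = {}
    for day, topics in per_day.items():
        total = sum(topics.values())
        shares[day] = {t: round(m / total, 2) for t, m in topics.items()}
    return shares

log = [
    ("mon", "sports", 30), ("mon", "news", 60), ("mon", "crypto", 10),
    ("tue", "sports", 20), ("tue", "news", 40), ("tue", "crypto", 60),
    ("wed", "sports", 10), ("wed", "news", 20), ("wed", "crypto", 90),
]
for day, shares in daily_topic_shares(log).items():
    print(day, shares)
# Shows "crypto" creeping from 10% of Monday's activity to 75% by Wednesday,
# the kind of gradual shift a person might not notice on their own.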

Final verdict

Harari's AI sidekick is an interesting idea.

At its heart, it suggests upending current AI-based recommendation models to protect users against brain hacking.

However, as we saw, there are some real hurdles to creating such a sidekick.

First, creating an AI system that can monitor all your activities is costly.

And second, protecting the human mind against harm is something that requires human intelligence.

That said, I'm not suggesting that AI can't help protect you against brain hacking.

With this in mind, you might create an AI agent that needs less data.

AI assistants can be a good tool in helping detect brain hacking and harmful online behavior.

But they can't replace human judgement.

It'll be up to you and your loved ones to decide what's best for you.
