Cybersecurity guru Mikko Hyppönen’s 5 most fearsome AI threats for 2024

Mikko Hypponen has spent decades on the frontlines of the fight against malware.

“AI changes everything,” Hypponen tells TNW on a video call.

“The AI revolution is going to be bigger than the internet revolution.”

As a self-described optimist, the hacker hunter expects the revolution to leave a positive impact.

But he’s also worried about the cyber threats it will unleash.

At the dawn of 2024, Hypponen revealed his five most pressing concerns for the year to come.


They come in no particular order, although there is one that’s causing the most sleepless nights.


Deepfakes

Not yet, anyway.

In recent months, however, their fears have started to materialise.

In the world of information warfare, fabricated videos are also advancing.

Deepfakes are also now emerging in simple cons.

Hypponen has only seen three so far, but he expects their number to grow quickly.

As deepfakes become more refined, accessible, and affordable, their scale could expand rapidly.

To reduce the risk, he suggests an old-fashioned defence: safe words.

Picture a video call with colleagues or family members. If a caller makes an unusual request, such as asking for money or credentials, a pre-agreed safe word can verify that they are who they appear to be.

“That’s what we should be taking away right now for 2024.”

Deep scams

Despite resembling deepfakes in name, deep scams dont necessarily involve manipulated media.

In their case, the “deep” refers to the massive scale of the scam.

That scale is achieved through automation, which can expand the pool of targets from a handful to a virtually endless supply.

The techniques can turbocharge all manner of scams.

In one romance scam, a conman stole an estimated $10 million from women he met online.

If AI automated the courtship, the pool of potential victims would be enormous.

“You could be scamming 10,000 victims at the same time instead of three or four,” Hypponen says.

Airbnb scammers can also reap the benefits.

Currently, they typically use stolen images from real listings to convince holidaymakers to make a booking.

It’s a laborious process that can be foiled with a reverse image search.

With GenAI, those barriers no longer exist.

LLM-enabled malware

AI is already writing malware.

Hypponen’s team has discovered three worms that launch LLMs to rewrite code every time the malware replicates.

None have been found in real networks yet, but they’ve been published on GitHub and they work.

Using an OpenAI API, the worms harness GPT to generate different code for every target they infect.

That makes them difficult to detect.
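To see why per-replication rewriting defeats classic signature matching, here is a minimal, defensive-only Python sketch (not from Hypponen’s team; the snippets are invented for illustration): two behaviourally identical code variants produce different cryptographic hashes, so a signature database keyed to one copy never matches the rewritten copy.

```python
import hashlib

# Illustrative only: two functionally identical snippets, standing in for
# code that an LLM regenerates in a different form on every replication.
variant_a = "def add(a, b):\n    return a + b\n"
variant_b = "def add(x, y):\n    total = x + y\n    return total\n"

sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()

# A signature keyed on variant A's hash will not match variant B,
# even though both copies behave identically.
print(sig_a != sig_b)  # True: hash-based signatures miss the rewritten copy
```

This is why defenders must fall back on behavioural detection, or, as the article notes, on the model provider blocking the malicious prompts at the source.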

OpenAI can, however, blacklist the behaviour of the malware.

“This is doable with the most powerful code-writing generative AI systems because they are closed source,” Hypponen says.

If the model were open source, attackers could run it on their own hardware. “They couldn’t blacklist you anymore.”

This is the benefit of closed-source generative AI systems.

The benefit also applies to image generator algorithms.

Offer open access to the code and watch your restrictions on violence, porn, and deception get dismantled.

With that in mind, it’s unsurprising that OpenAI is more closed than its name suggests.

Well, that and all the income they would lose to copycat developers, of course.

AI can detect these threats but it can also create them.

A student working at WithSecure has already demonstrated the threat in a thesis project.

The student fully automated the process of scanning for vulnerabilities and escalating privileges to become the local admin.

WithSecure decided to classify the thesis.

“We didn’t think it was responsible to publish the research,” Hypponen says.

“It was too good.”

Automated malware

WithSecure has baked automation into its defences for decades.

That gives the company an edge over attackers, who still largely rely on manual operations.

For criminals, there’s a clear way to close the gap: fully automated malware campaigns.

“That would turn the game into good AI versus bad AI,” Hypponen says.

That game is set to start soon.

When it does, the results could be alarming.

So alarming that Hypponen ranks fully automated malware as the number one security threat for 2024.

Yet lurking around the corner is an even bigger threat.

The perilous path to AGI

Hypponen has a noted hypothesis about IoT security.

Known as Hypponen’s Law, the theory states that whenever an appliance is described as “smart”, it’s vulnerable.

If that law applies to superintelligent machines, we could get into some serious trouble.

Hypponen expects to witness the impact.

“I think we will become the second most intelligent being on the planet during my lifetime,” he says.

“I don’t think it’s going to happen in 2024.”

“But I think it’s going to happen during my lifetime.”

That would add urgency to fears about artificial general intelligence.

To maintain human control of AGI, Hypponen advocates for strong alignment with our goals and needs.

“The things we are building must have an understanding of humanity and share its long-term interests with humans…”

“The upside is huge, bigger than anything ever, but the downside is also bigger than anything ever.”

Story by Thomas Macaulay

Thomas is the managing editor of TNW.

He leads our coverage of European tech and oversees our talented team of writers.


Away from work, he enjoys playing chess (badly) and the guitar (even worse).
