AI image generators could undermine upcoming elections in the world's biggest democracies, according to new research.
Each of these democracies will soon go to the ballot box.
Logically, the company behind the research, tested three popular generative AI systems: Midjourney, DALL-E 2, and Stable Diffusion.

All three have some form of content moderation, but the exact parameters are unclear.
Logically explored how these platforms could support disinformation campaigns.

The research found that Midjourney had the strongest content moderation and produced the highest-quality images.
DALL-E 2 and Stable Diffusion had more limited moderation and generated inferior images.
Stable Diffusion accepted all the prompts.

Most of the images were far from photo-realistic.
But Logically says even crude pictures can be used for malicious purposes.
Logically has called for further content moderation on the platforms.
It also wants social media companies to be more proactive in tackling AI-generated disinformation.
Finally, the company recommends developing tools that identify malicious and coordinated behaviour.
Cynics may note that Logically could benefit from these measures.
Nonetheless, the research shows generative AI could amplify false election narratives.
Story by Thomas Macaulay
Thomas is the managing editor of TNW.
He leads our coverage of European tech and oversees our talented team of writers.
Away from work, he enjoys playing chess (badly) and the guitar (even worse).