The upcoming year will be a busy one for democracy.
Major elections will take place in the US, the EU, and Taiwan, among others.
The study was conducted by AlgorithmWatch and AI Forensics.

Microsoft should acknowledge this, and recognize that flagging the generative AI content made by others is not enough.
Its tools, even when citing trustworthy sources, produce incorrect information at scale. Moreover, the incorrect information was often attributed to a source that actually contained the correct information on the topic.

However, rather than declining to respond, it often simply made up an answer, including fabricating allegations of corruption.
The samples for the study were collected from 21 August 2023 to 2 October 2023.
However, one month on, new samples yielded similar results.
Microsoft's press office was not available for comment ahead of the publication of this article.
Meanwhile, they also urged users to apply their best judgement when reviewing the Microsoft AI chatbot's results.
Our research exposes the much more intricate and structural occurrence of misleading factual errors in general-purpose LLMs and chatbots.
You can find the study in its entirety here.
Story by Linnea Ahlgren
Linnea is the senior editor at TNW, having joined in April 2023.