So far, most misinformation, flagged and unflagged, has been aimed at the general public.
As a graduate student and as faculty members doing research in cybersecurity, we studied a new avenue of misinformation in the scientific community.
General misinformation often aims to tarnish the reputation of companies or public figures. Misinformation within communities of expertise has the potential for scarier outcomes, such as delivering incorrect medical advice to doctors and patients. This could put lives at risk.
To test this threat, we studied the impacts of spreading misinformation in the cybersecurity and medical communities.
We found that transformer-generated misinformation was able to fool cybersecurity experts.

Much of the technology used to identify and manage misinformation is powered by artificial intelligence.
Although AI helps people detect misinformation, it has ironically also been used to produce misinformation in recent years.
Transformers, such as Google's BERT and OpenAI's GPT, use natural language processing to understand text and produce translations, summaries and interpretations. But they can also be used for malevolent purposes.
Social networks like Facebook and Twitter have already faced the challenges of AI-generated fake news across their platforms.
Critical misinformation
Our research shows that transformers also pose a misinformation threat in medicine and cybersecurity.
To illustrate how serious this is, we fine-tuned the GPT-2 transformer model on open online sources discussing cybersecurity vulnerabilities and attack information, then used the model to generate its own description of a cyberthreat.
We presented this generated description to cyberthreat hunters, who sift through lots of information about cybersecurity threats. These professionals read the threat descriptions to identify potential attacks and adjust the defenses of their systems.
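The study's actual code isn't reproduced here, but the general recipe — fine-tune a pretrained GPT-2 on domain text, then sample from it — can be sketched with standard open-source tooling. Below is a minimal illustration assuming the Hugging Face transformers and datasets libraries; the corpus file cyber_threat_corpus.txt, the prompt and all hyperparameters are hypothetical stand-ins, not our actual setup.

```python
# A minimal sketch of the pipeline described above, using the Hugging Face
# "transformers" and "datasets" libraries. The corpus file, prompt and
# hyperparameters are illustrative assumptions, not the study's actual setup.
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical plain-text corpus of vulnerability and attack write-ups,
# one document per line.
corpus = load_dataset("text", data_files={"train": "cyber_threat_corpus.txt"})
tokenized = corpus["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

# Standard causal language-model fine-tuning (mlm=False).
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-cti", num_train_epochs=3),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Sample a synthetic "threat description" from a short prompt.
inputs = tokenizer("A new vulnerability in", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=120, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```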

We were surprised by the results: the fabricated descriptions were able to fool the cyberthreat hunters, professionals who are knowledgeable about a wide range of attacks and vulnerabilities.
[Image: An example of AI-generated cybersecurity misinformation.]
A similar transformer-based model can generate information in the medical domain and potentially fool medical experts. During the COVID-19 pandemic, for example, research papers that have not yet undergone rigorous review have been continuously posted to preprint servers such as medRxiv. These papers are not only being described in the press but are being used to make public health decisions.
[Image: An example of AI-generated health care misinformation.]
An AI misinformation arms race?
Many cyberdefense systems automatically mine open sources of threat intelligence to keep their defenses up to date. If these automated systems process such false cybersecurity text, they will be less effective at detecting true threats.
Cybersecurity researchers continuously study ways to detect misinformation in different domains.
Understanding how to automatically generate misinformation helps in understanding how to recognize it.
For example, automatically generated information often has subtle grammatical mistakes that systems can be trained to detect.
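One simple proxy for those statistical tells is perplexity: machine-generated text often looks unusually predictable to a language model similar to the one that produced it. The toy sketch below scores a passage under off-the-shelf GPT-2; the threshold is an arbitrary assumption for illustration, not a validated detector.

```python
# A toy illustration of one statistical cue: machine-generated text tends to
# be unusually "predictable" to a language model. Perplexity under GPT-2 is
# one simple proxy; the threshold below is an arbitrary assumption.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Labels equal to inputs: the model returns mean cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

sample = "The attacker exploits the vulnerability to execute arbitrary code."
score = perplexity(sample)
print(f"perplexity = {score:.1f}")
if score < 20:  # arbitrary threshold, for illustration only
    print("Suspiciously fluent: possibly machine-generated.")
```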
Systems can also cross-correlate information from multiple sources and identify claims lacking substantial support from other sources.
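That cross-correlation idea can also be sketched roughly: embed a claim alongside snippets from independent sources and flag it if nothing in the pool is semantically close. The example below assumes the sentence-transformers library; the model choice, sample texts and threshold are all illustrative.

```python
# A minimal sketch of cross-source corroboration: embed a claim and a pool of
# snippets from other sources, and flag the claim if nothing in the pool is
# semantically close to it. Model name and threshold are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

claim = "Patch X fixes a remote code execution flaw in the login service."
other_sources = [
    "Vendor advisory: update X resolves a remote code execution issue.",
    "Release notes mention performance improvements in the login service.",
]

claim_vec = model.encode(claim, convert_to_tensor=True)
source_vecs = model.encode(other_sources, convert_to_tensor=True)

# Highest cosine similarity between the claim and any independent snippet.
best = float(util.cos_sim(claim_vec, source_vecs).max())
print(f"best support score = {best:.2f}")
if best < 0.5:  # arbitrary threshold, for illustration only
    print("Claim lacks substantial support from other sources.")
```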