OpenAI Blocks Iranian Influence Operation Using ChatGPT for U.S. Election Propaganda

OpenAI recently revealed that it had detected and shut down a network of accounts tied to an Iranian covert influence operation known as Storm-2035. The accounts used ChatGPT to generate content, including commentary on the upcoming U.S. presidential election. Despite these efforts, the material attracted minimal engagement on social media platforms.

The articles produced with ChatGPT were published on websites posing as progressive and conservative news outlets. The operation targeted readers across the political spectrum with content on U.S. politics, global events, and the conflict in Gaza, among other topics.

Microsoft has likewise highlighted Storm-2035 as an Iranian network actively engaging with U.S. voter groups on divisive issues. The group set up fake news and commentary sites such as EvenPolitics and Nio Thinker, using AI tools to plagiarize content from U.S. publications.

In a sign of evolving tactics, a separate Russia-linked propaganda network has begun using non-political posts and ads to deceive users, mimicking entertainment and health publications to redirect readers to Russia-related articles hosted on counterfeit domains.

Microsoft also warned of increased foreign influence activity targeting the U.S. election, with both Iranian and Russian networks involved in such operations.

Google’s Threat Analysis Group (TAG) identified and disrupted Iranian-backed spear-phishing attempts aimed at high-profile individuals in Israel and the U.S., including those associated with the U.S. presidential campaigns.

The phishing attacks, attributed to the threat actor APT42, relied on sophisticated social engineering to lure targets into handing over their login credentials via malicious links.

Overall, these incidents underscore the ongoing challenges posed by malicious actors seeking to manipulate public opinion and interfere in political processes through deceptive online tactics.