State actors have been systematically exploiting ChatGPT for foreign propaganda campaigns. On June 5, 2025, Engadget reported that OpenAI had disrupted four Chinese covert influence operations using ChatGPT to generate social media posts and replies across TikTok, Facebook, Reddit, and X, while also identifying operations from Russia, Iran, and North Korea that combine influence tactics with social engineering and surveillance. The article begins:
Chinese propaganda and social engineering operations have been using ChatGPT to create posts, comments and drive engagement at home and abroad. OpenAI said it has recently disrupted four Chinese covert influence operations that were using its tool to generate social media posts and replies on platforms including TikTok, Facebook, Reddit and X. The comments generated revolved around several topics from US politics to a Taiwanese video game where players fight the Chinese Communist Party.
Key Points
- Chinese operations used ChatGPT to create contradictory social media posts supporting and opposing the same issues to deliberately stir misleading political discourse.
- OpenAI identified operations targeting diverse topics including US politics and a Taiwanese video game featuring battles against the Chinese Communist Party.
- Iranian actors previously used ChatGPT to create longform political articles for fake news sites posing as both conservative and progressive outlets during US elections.
- OpenAI’s Ben Nimmo noted that while AI tools are improving, “better tools don’t necessarily mean better outcomes” for influence campaign effectiveness.
ChatGPT & State Actor Propaganda: The New Frontline in Influence Operations
State actors are increasingly weaponizing ChatGPT and similar generative AI tools to power sophisticated influence operations, as evidenced by the exposure of a Russian bot farm that flooded Australian digital spaces with Kremlin-aligned narratives in an attempt to manipulate AI chatbot outputs and sow division ahead of national elections. Meanwhile, Chinese and Iranian networks have leveraged ChatGPT for surveillance, social engineering, and the mass production of multilingual propaganda, targeting both domestic and international audiences through coordinated campaigns on platforms like X, Facebook, and Reddit. These operations often blend automated content generation with human curation, maximizing reach and plausibility while evading detection.

Recent OpenAI threat assessments and independent reporting confirm that such campaigns are not isolated: Chinese groups have used ChatGPT to create divisive social media posts, impersonate journalists, and even solicit sensitive information, while Russian and Iranian actors have exploited the technology to amplify disinformation and target global political processes. Although OpenAI and other tech firms have disrupted several of these networks, research indicates that as generative AI becomes more accessible and sophisticated, the scale and speed of propaganda operations will likely increase, posing ongoing challenges to information integrity and democratic resilience.
The Global Influence Operations Report (GIOR) utilizes AI throughout the posting process, including the generation of summaries for news items, introductions, key points, and often the “context” section. We recommend verifying all information before use. Additionally, images are AI-generated and intended solely for illustrative purposes. While they represent the events or individuals discussed, they should not be interpreted as real-world photography.