Malicious use of ChatGPT by Chinese groups has escalated significantly, according to new threat intelligence. On June 6, 2025, Reuters reported that OpenAI had detected an increasing number of Chinese organizations using its AI technology for covert operations, including generating polarizing social media content on divisive U.S. political topics and developing tools for cyber attacks, though the operations remained generally small-scale and reached limited audiences. The article begins:
OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday. While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said.
Key Points
- Chinese actors generated social media posts criticizing Taiwan-centric video games, making false accusations against Pakistani activists, and attacking the decision to close USAID.
- Threat actors used ChatGPT for cyber operations, including modifying scripts, developing password brute-forcing tools, and building social media automation systems.
- One operation generated polarizing content supporting both sides of divisive U.S. political topics, complete with AI-generated profile images for fake accounts.
- China’s foreign ministry denied the allegations, stating there is “no basis” for OpenAI’s claims about misuse of AI technology.
ChatGPT and Influence Operations: Global Tactics, Threats, and Responses
Recent investigations have revealed that actors in China, Russia, and Iran are systematically exploiting ChatGPT and similar generative AI tools to automate propaganda, social engineering, and surveillance across multiple platforms. OpenAI has disrupted several Chinese propaganda and social engineering operations that used ChatGPT to generate divisive social media content in English, Chinese, and Urdu, targeting topics from U.S. politics to Taiwanese video games. These operations often posted contradictory messages to amplify confusion and polarization.
Russian and Iranian networks have also harnessed ChatGPT for multilingual disinformation: Russian actors have run bot farms to poison AI systems and manipulate election discourse, while Iranian operatives have produced long-form articles for fake news sites aimed at swaying U.S. elections. OpenAI has responded by banning entire surveillance and influence networks, even as Chinese strategists continue to develop AI-powered information warfare tactics that combine automated content with human curation.
These trends are echoed by U.S. intelligence and independent research, which confirm that AI is now a catalyst for election interference, enabling adversaries to produce and distribute synthetic content at scale and complicating detection and mitigation for democratic societies.
External References:
- OpenAI takes down covert operations tied to China and other countries
- Russia, Iran and China are using AI in election interference efforts
Disclaimer
The Global Influence Operations Report (GIOR) employs AI throughout the posting process, including generating summaries of news items, the introduction, key points, and often the “context” section. We recommend verifying all information before use. Additionally, images are AI-generated and intended solely for illustrative purposes. While they represent the events or individuals discussed, they should not be interpreted as real-world photography.