China | June 11, 2025, 9:32 am

OpenAI Reports Rising Threats From Chinese Malicious Use of ChatGPT

Malicious activities by Chinese groups using ChatGPT have escalated significantly, according to new threat intelligence. On June 6, 2025, Reuters reported that OpenAI detected an increasing number of Chinese organizations using its AI technology for covert operations, including generating polarized social media content on divisive U.S. political topics and developing tools for cyber attacks, though the operations remained generally small-scale with limited audience reach. The article begins:

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday. While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said.

Read more: https://www.reuters.com/world/china/openai-finds-more-chinese-groups-using-chatgpt-malicious-purposes-2025-06-05/

Key Points

  • Chinese actors generated social media posts criticizing Taiwan-centric video games, making false accusations against Pakistani activists, and attacking USAID closure decisions.
  • Threat actors used ChatGPT for cyber operations including script modification, password brute-forcing tool development, and social media automation systems.
  • One operation created polarized content supporting both sides of divisive U.S. political topics, including AI-generated profile images for fake accounts.
  • China’s foreign ministry denied the allegations, stating there is “no basis” for OpenAI’s claims about misuse of AI technology.

ChatGPT and Influence Operations: Global Tactics, Threats, and Responses

Recent investigations have revealed that coordinated efforts by actors in China, Russia, and Iran are systematically exploiting ChatGPT and similar generative AI tools to automate propaganda, social engineering, and surveillance across multiple platforms. OpenAI has disrupted several Chinese propaganda and social engineering operations that used ChatGPT to generate divisive social media content in English, Chinese, and Urdu, targeting topics from U.S. politics to Taiwanese video games. These operations often posted contradictory messages to amplify confusion and polarization.

Russian and Iranian networks have also harnessed ChatGPT for multilingual disinformation, with Russian actors running bot farms to poison AI systems and manipulate election discourse, and Iranian operatives creating long-form articles for fake news sites to sway U.S. elections. OpenAI’s efforts have included banning entire surveillance and influence networks, while Chinese strategists continue to develop AI-powered information warfare tactics that combine automated content with human curation.

These trends are echoed by U.S. intelligence and independent research, which confirm that AI is now a catalyst for election interference, enabling adversaries to produce and distribute synthetic content at scale and complicating detection and mitigation for democratic societies.

External References:

  1. OpenAI takes down covert operations tied to China and other countries

  2. Russia, Iran and China are using AI in election interference efforts

  3. AI and virtual manipulation in 2025 — EthicAI

Disclaimer

The Global Influence Operations Report (GIOR) employs AI throughout the posting process, including generating summaries of news items, the introduction, key points, and often the “context” section. We recommend verifying all information before use. Additionally, images are AI-generated and intended solely for illustrative purposes. While they represent the events or individuals discussed, they should not be interpreted as real-world photography.