China | June 6, 2025, 11:58 am

OpenAI Disrupts Foreign Propaganda Campaigns Using ChatGPT

State actors have been systematically exploiting ChatGPT for foreign propaganda campaigns. On June 5, 2025, Engadget reported that OpenAI has disrupted four Chinese covert influence operations using ChatGPT to generate social media posts and replies across TikTok, Facebook, Reddit, and X, while also identifying operations from Russia, Iran, and North Korea that combine influence tactics with social engineering and surveillance. The article begins:

Chinese propaganda and social engineering operations have been using ChatGPT to create posts, comments and drive engagement at home and abroad. OpenAI said it has recently disrupted four Chinese covert influence operations that were using its tool to generate social media posts and replies on platforms including TikTok, Facebook, Reddit and X. The comments generated revolved around several topics from US politics to a Taiwanese video game where players fight the Chinese Communist Party.

Read more: https://www.engadget.com/ai/foreign-propagandists-continue-using-chatgpt-in-influence-campaigns-161509862.html

Key Points

  • Chinese operations used ChatGPT to create contradictory social media posts supporting and opposing the same issues to deliberately stir misleading political discourse.
  • OpenAI identified operations targeting diverse topics, including US politics and a Taiwanese video game featuring battles against the Chinese Communist Party.
  • Iranian actors previously used ChatGPT to create long-form political articles for fake news sites posing as both conservative and progressive outlets during US elections.
  • OpenAI’s Ben Nimmo noted that while AI tools are improving, “better tools don’t necessarily mean better outcomes” for influence campaign effectiveness.

ChatGPT & State Actor Propaganda: The New Frontline in Influence Operations

State actors are increasingly weaponizing ChatGPT and similar generative AI tools to power sophisticated influence operations, as evidenced by the exposure of a Russian bot farm that flooded Australian digital spaces with Kremlin-aligned narratives in an attempt to manipulate AI chatbot outputs and sow division ahead of national elections. Meanwhile, Chinese and Iranian networks have leveraged ChatGPT for surveillance, social engineering, and the mass production of multilingual propaganda, targeting both domestic and international audiences through coordinated campaigns on platforms like X, Facebook, and Reddit. These operations often blend automated content generation with human curation, maximizing reach and plausibility while evading detection.

Recent OpenAI threat assessments and independent reporting confirm that such campaigns are not isolated: Chinese groups have used ChatGPT to create divisive social media posts, impersonate journalists, and even solicit sensitive information, while Russian and Iranian actors have exploited the technology to amplify disinformation and target global political processes. Although OpenAI and other tech firms have disrupted several of these networks, research indicates that as generative AI becomes more accessible and sophisticated, the scale and speed of propaganda operations will likely increase, posing ongoing challenges to information integrity and democratic resilience.

External References:

  1. OpenAI finds more Chinese groups using ChatGPT for malicious purposes

  2. Foreign propagandists continue using ChatGPT in influence campaigns

  3. OpenAI takes down covert operations tied to China and other countries

Disclaimer

The Global Influence Operations Report (GIOR) utilizes AI throughout the posting process, including the generation of summaries for news items, introductions, key points, and often the “context” section. We recommend verifying all information before use. Additionally, images are AI-generated and intended solely for illustrative purposes. While they represent the events or individuals discussed, they should not be interpreted as real-world photography.