Iran | June 22, 2025, 4:24 am

Iranian TikTok AI Disinformation Campaign: Five Propaganda Strategies Exposed

An Iranian TikTok AI disinformation campaign has reached unprecedented sophistication in the wake of recent military exchanges with Israel. On June 20, 2025, the International Institute for Counter-Terrorism published an analysis by Daniel Haberfeld and Dr. Eitan Azani revealing how Tehran deploys artificial intelligence across TikTok in five distinct propaganda strategies, spanning multiple languages to fabricate Israeli destruction while glorifying Iranian leadership and military capabilities. The article begins:

On 12 June 2025, Israel launched an operation targeting Iranian nuclear facilities and military assets. In response, Iran initiated missile and drone attacks against Israel, though with limited success compared to the scale and precision of the Israeli offensive. In parallel, Iran and its affiliates have escalated their efforts in the information domain. This includes a surge in online propaganda, misinformation, coordinated influence operations, cyber-warfare, and AI-generated disinformation, strategically deployed to shape public perception and project strength. This report highlights a segment of Iran's information warfare campaign since the war erupted, focusing on the use of AI-generated content distributed via TikTok. TikTok has emerged as a central platform in Iran's disinformation campaign due to its global reach, popularity among younger audiences, and algorithmic preference for emotionally engaging, visual content. These features enable rapid dissemination of AI-generated propaganda well beyond the immediate follower base. The campaign's multilingual nature, spanning Farsi, Arabic, Hebrew, English, and multiple East Asian languages, reflects a calculated strategy to influence various target audiences. Arabic and Farsi content often promotes regional solidarity and anti-Israel sentiment; Hebrew-language videos focus on psychological pressure within Israel; English content appeals to global public opinion; and East Asian language material may aim to broaden exposure or exploit trending narratives in those regions.

Read more: https://ict.org.il/iranian-tiktok-campaign-seeks-to-shape-war-perceptions-using-ai/

Key Points

  • Iran fabricates widespread Israeli destruction using AI to transform ordinary street images into scenes of devastation, including fake footage of Tel Aviv's Ben Gurion Airport and burning EL AL aircraft.
  • AI-generated content ridicules Israeli leadership while glorifying Supreme Leader Khamenei, showing symbolic scenarios in which Khamenei dominates Netanyahu and Trump to project Iranian superiority.
  • The campaign operates across multiple languages, including Farsi, Arabic, Hebrew, English, and East Asian languages, to target different regional audiences with tailored messaging.
  • Daily "Iran Attacks Israel Today" videos use AI to fabricate fictional missile and aircraft attacks, while fake news about captured Israeli pilots is reinforced with AI-generated "evidence."

TikTok’s Global Influence Operations: Campaigns and Control

TikTok has become a critical vector for global influence operations, with state and non-state actors exploiting its algorithm and reach to shape narratives and manipulate public sentiment. Following bans in certain regions, a strategic RedNote promotion campaign linked to the CCP sought to redirect American users to alternative platforms, illustrating the adaptability of digital influence strategies. Romanian intelligence agencies uncovered a massive TikTok-based election interference operation that propelled a previously obscure far-right candidate, highlighting the platform's susceptibility to foreign manipulation.

Concerns about censorship and bias are further underscored by discrepancies in TikTok content on sensitive topics related to China, while investigations reveal that hundreds of TikTok and ByteDance employees have direct connections to Chinese state media. The platform's algorithm has been shown to rapidly amplify misinformation about the war in Ukraine, and despite grassroots efforts by young Ukrainians fighting disinformation on TikTok, the platform has struggled to control Kremlin propaganda. Additionally, China has quietly built networks of TikTok and Facebook influencers to push propaganda and has even hired Western TikTokers to polish its image during major events such as the 2022 Winter Olympics, reinforcing the platform's centrality in modern influence campaigns.

External analyses show that TikTok's own transparency reports reveal ongoing disruptions of covert influence networks, including those originating from China and Iran, targeting political discourse in multiple countries. Despite these efforts, concerns persist about the platform's vulnerability to manipulation, given its rapid growth and the geopolitical interests of its parent company.

Disclaimer

The Global Influence Operations Report (GIOR) employs AI throughout the posting process, including generating summaries of news items, the introduction, key points, and often the "context" section. We recommend verifying all information before use. Additionally, images are AI-generated and intended solely for illustrative purposes. While they represent the events or individuals discussed, they should not be interpreted as real-world photography.